diff --git a/.github/AGENTIC_DEV_README.md b/.github/AGENTIC_DEV_README.md
new file mode 100644
index 00000000..d1b0d2e1
--- /dev/null
+++ b/.github/AGENTIC_DEV_README.md
@@ -0,0 +1,410 @@
+# Telemetry Agentic Development System
+
+This directory contains the agentic development setup for the Telemetry embedded systems project, designed to assist with C/C++/shell development on resource-constrained embedded devices.
+
+## Overview
+
+The agentic system provides specialized AI agents, skills, and instructions to help developers:
+
+- **Write memory-safe code** for embedded devices with limited resources
+- **Refactor legacy code** without introducing regressions
+- **Maintain platform independence** across different architectures
+- **Prevent memory fragmentation** while optimizing performance
+- **Implement thread-safe code** with lightweight synchronization
+- **Avoid deadlocks** through proper lock ordering and timeouts
+- **Ensure zero regressions** through comprehensive testing
+
+## Structure
+
+```
+.github/
+├── AGENTIC_DEV_README.md        # This file
+├── copilot-instructions.md      # Main project guidelines
+├── agents/                      # Specialized AI agents
+│   ├── embedded-c-expert.agent.md
+│   └── legacy-refactor-specialist.agent.md
+├── skills/                      # Reusable analysis skills
+│   ├── memory-safety-analyzer/
+│   │   └── SKILL.md
+│   ├── thread-safety-analyzer/
+│   │   └── SKILL.md
+│   └── platform-portability-checker/
+│       └── SKILL.md
+├── instructions/                # Language-specific guidelines
+│   ├── c-embedded.instructions.md
+│   ├── cpp-testing.instructions.md
+│   ├── shell-scripts.instructions.md
+│   └── build-system.instructions.md
+└── workflows/                   # CI/CD automation
+    └── copilot-quality-checks.yml
+```
+
+## Components
+
+### 1. Main Instructions
+
+**[copilot-instructions.md](copilot-instructions.md)**
+
+- Project context and constraints
+- Critical requirements for embedded development
+- Architecture principles
+- Code review standards
+
+### 2. 
Agents + +#### **Embedded C Expert** ([embedded-c-expert.agent.md](agents/embedded-c-expert.agent.md)) + +Expert in embedded C development focusing on: + +- Memory management without garbage collection +- Thread safety with lightweight synchronization +- Resource optimization for constrained environments +- Platform-independent code patterns +- Memory leak detection and prevention +- Deadlock prevention and race condition detection + +**Use when:** + +- Writing new C code +- Reviewing memory-critical changes +- Optimizing for embedded devices +- Debugging memory issues + +#### **Legacy Refactor Specialist** ([legacy-refactor-specialist.agent.md](agents/legacy-refactor-specialist.agent.md)) + +Specialist in safely refactoring legacy code while: + +- Maintaining zero regressions +- Preserving API compatibility +- Reducing memory footprint +- Improving maintainability + +**Use when:** + +- Refactoring existing code +- Improving code quality +- Reducing technical debt +- Updating legacy implementations + +### 3. 
Skills + +#### **Memory Safety Analyzer** ([skills/memory-safety-analyzer/SKILL.md](skills/memory-safety-analyzer/SKILL.md)) + +Systematic analysis of C/C++ code for: + +- Memory leaks +- Use-after-free errors +- Buffer overflows +- Double-free issues +- Unchecked allocations + +**Usage:** `@memory-safety-analyzer analyze [file]` + +#### **Thread Safety Analyzer** ([skills/thread-safety-analyzer/SKILL.md](skills/thread-safety-analyzer/SKILL.md)) + +Analysis of concurrent code for: + +- Race conditions +- Deadlock potential +- Lock ordering violations +- Improper synchronization +- Atomic usage issues + +**Usage:** `@thread-safety-analyzer check [file]` + +#### **Platform Portability Checker** ([skills/platform-portability-checker/SKILL.md](skills/platform-portability-checker/SKILL.md)) + +Verification of platform independence: + +- Integer type portability +- Endianness handling +- Structure packing +- Platform-specific syscalls + +**Usage:** `@platform-portability-checker check [file]` + +### 4. 
Language-Specific Instructions
+
+#### **C Embedded** ([instructions/c-embedded.instructions.md](instructions/c-embedded.instructions.md))
+
+Applies to: `**/*.c`, `**/*.h`
+
+- Memory management best practices
+- Thread safety and concurrency patterns
+- Resource constraint optimization
+- Platform independence patterns
+- Error handling conventions
+
+#### **C++ Testing** ([instructions/cpp-testing.instructions.md](instructions/cpp-testing.instructions.md))
+
+Applies to: `source/test/**/*.cpp`, `source/test/**/*.h`
+
+- Google Test/Mock patterns
+- Testing C code from C++
+- Memory leak testing
+- RAII (Resource Acquisition Is Initialization) wrappers for C resources
+
+#### **Shell Scripts** ([instructions/shell-scripts.instructions.md](instructions/shell-scripts.instructions.md))
+
+Applies to: `**/*.sh`
+
+- POSIX compliance
+- Resource-aware scripting
+- Error handling
+- Platform independence
+
+#### **Build System** ([instructions/build-system.instructions.md](instructions/build-system.instructions.md))
+
+Applies to: `**/Makefile.am`, `**/configure.ac`
+
+- Autotools best practices
+- Cross-compilation support
+- Dependency management
+- Testing integration
+
+### 5. CI/CD Workflow
+
+**[workflows/copilot-quality-checks.yml](workflows/copilot-quality-checks.yml)**
+
+Automated quality checks:
+- Build verification with strict warnings
+- Unit test execution
+- Static analysis (cppcheck)
+- Memory leak detection (valgrind)
+- Thread safety checks (helgrind)
+- Shell script validation (shellcheck)
+
+**[workflows/copilot-setup-steps.yml](workflows/copilot-setup-steps.yml)**
+
+Environment setup for the Copilot coding agent, verified with a subset of the quality checks:
+
+- Build verification
+- Unit test execution
+- Static analysis (cppcheck)
+- Memory leak detection (valgrind)
+- Shell script validation (shellcheck)
+
+## How to Use
+
+### For New Development
+
+1. 
**Start a new feature:** + ``` + @workspace /new Create a new telemetry event handler for network statistics + ``` + The system will automatically apply C embedded standards. + +2. **Use specific agent:** + ``` + @embedded-c-expert Implement a memory pool for frequent event allocations + ``` + +3. **Run analysis:** + ``` + @memory-safety-analyzer Review source/telemetry/events.c for memory issues + ``` + +4. **Thread safety implementation:** + ``` + @embedded-c-expert Add thread-safe event queue with minimal locking + ``` + +### For Refactoring + +1. **Invoke refactoring specialist:** + ``` + @legacy-refactor-specialist Refactor source/protocol/http_client.c to reduce memory usage + ``` + +2. **Check platform portability:** + ``` + @platform-portability-checker Verify source/utils/endian.c is platform-independent + ``` + +### For Code Review + +1. **Request comprehensive review:** + ``` + @workspace Review this PR for memory safety and platform independence + ``` + +2. **Specific analysis:** + ``` + @memory-safety-analyzer Check for leaks in the error paths + ``` + +## Best Practices + +### Memory Management + +✅ **DO:** +- Check all `malloc()` return values +- Use single exit point with cleanup labels +- Free resources in reverse order of allocation +- Run valgrind on all tests +- Prefer stack allocation for small data + +❌ **DON'T:** +- Assume `malloc()` always succeeds +- Create memory leaks in error paths +- Use unchecked `strcpy()` or `sprintf()` +- Ignore compiler warnings +- Mix different allocation strategies + +### Platform Independence + +✅ **DO:** +- Use `stdint.h` types (`uint32_t`, etc.) 
+- Handle endianness explicitly (`htonl`/`ntohl`) +- Use `stdbool.h` for booleans +- Test on multiple architectures +- Use autoconf for platform detection + +❌ **DON'T:** +- Assume pointer sizes +- Use platform-specific headers without checks +- Hard-code byte order +- Use non-standard compiler extensions +- Assume structure packing + +### Thread Safety + +✅ **DO:** +- Create threads with explicit stack sizes (pthread_attr_setstacksize) +- Use atomic operations for simple counters +- Establish consistent lock ordering +- Use timeouts with pthread_mutex_timedlock +- Document thread safety of all functions +- Prefer lock-free patterns when possible +- Keep critical sections minimal + +❌ **DON'T:** +- Create threads with default attributes (wastes memory) +- Use heavy locks (rwlocks) for simple operations +- Acquire locks in different orders (causes deadlocks) +- Hold locks during expensive operations +- Use reader-writer locks for counters +- Create threads dynamically for each task +- Ignore thread sanitizer warnings + +### Testing + +✅ **DO:** +- Write tests for all new code +- Test error paths +- Run tests under valgrind +- Use mocks for external dependencies +- Aim for >80% coverage + +❌ **DON'T:** +- Skip testing "simple" functions +- Ignore memory leaks in tests +- Test only happy paths +- Commit code that fails tests +- Remove tests to make builds pass + +## CI Integration + +The workflow runs automatically on: +- Push to any branch +- Pull requests +- Manual trigger via workflow_dispatch + +### Workflow Steps + +1. **Build** - Compile with autotools +2. **Test** - Run unit tests with gtest +3. **Static Analysis** - cppcheck for code issues +4. **Memory Check** - valgrind for leaks +5. **Concurrency Check** - helgrind for race conditions and deadlocks +6. 
**Shell Check** - shellcheck for scripts
+
+### Viewing Results
+
+Check the Actions tab in GitHub for:
+- Build logs
+- Test results
+- Static analysis warnings
+- Memory leak reports
+- Artifact downloads
+
+## Metrics and Goals
+
+### Code Quality Metrics
+
+- **Memory Leaks:** Zero tolerance (valgrind must pass)
+- **Race Conditions:** Zero data races (helgrind/thread sanitizer)
+- **Deadlocks:** No deadlock potential detected
+- **Test Coverage:** Minimum 80% for new code
+- **Static Analysis:** Zero critical/high severity issues
+- **Build Warnings:** Zero warnings with -Wall -Wextra
+- **Platform Support:** Linux (x86_64, ARM, MIPS)
+
+### Performance Targets
+
+- **Thread Stack Size:** 64KB per thread (not default 8MB)
+- **Binary Size:** Minimize code size
+- **CPU Usage:** <5% average on target devices
+- **Fragmentation:** <10% heap fragmentation after 24h
+- **Lock Contention:** <1% time spent waiting on locks
+
+## Troubleshooting
+
+### Agent Not Responding
+
+- Ensure agent filename ends with `.agent.md`
+- Check YAML frontmatter is valid
+- Verify agent is committed to repository
+
+### Instructions Not Applied
+
+- Check `applyTo` glob pattern matches your files
+- Ensure YAML frontmatter is properly formatted
+- Verify file is in workspace
+
+### Workflow Failing
+
+- Check dependencies are available
+- Verify configure.ac syntax
+- Review build logs in Actions tab
+- Test locally with same commands
+
+## Resources
+
+### Internal
+
+- [RDK Coding Standards](../CONTRIBUTING.md)
+- [Telemetry API Documentation](../include/telemetry2_0.h)
+- [Test Examples](../source/test/)
+
+### External
+
+- [Autotools Manual](https://www.gnu.org/software/automake/manual/)
+- [Valgrind Documentation](https://valgrind.org/docs/manual/)
+- [Google Test Guide](https://google.github.io/googletest/)
+- [Working Effectively with Legacy 
Code](https://www.oreilly.com/library/view/working-effectively-with/0131177052/)
+
+## Contributing
+
+To improve the agentic system:
+
+1. **Add new agents** - Create `.agent.md` in `agents/`
+2. **Add new skills** - Create `SKILL.md` in `skills/<skill-name>/`
+3. **Extend instructions** - Update language files in `instructions/`
+4. **Improve workflow** - Enhance `workflows/copilot-setup-steps.yml`
+
+Follow the patterns in existing files and test thoroughly.
+
+## Support
+
+For questions or issues:
+1. Check this README
+2. Review existing agents/skills
+3. Consult the awesome-copilot repository
+4. Ask in team channels
+
+---
+
+**Last Updated:** February 27, 2026
+**Version:** 1.0.0
+**Maintainers:** RDK Telemetry Team
diff --git a/.github/DOCUMENTATION_GUIDE.md b/.github/DOCUMENTATION_GUIDE.md
new file mode 100644
index 00000000..bb0a7040
--- /dev/null
+++ b/.github/DOCUMENTATION_GUIDE.md
@@ -0,0 +1,326 @@
+# Technical Documentation Writer Agent - Setup Summary
+
+## Overview
+
+A comprehensive technical documentation writer skill and documentation framework has been created for the Telemetry 2.0 embedded systems project. This agent specializes in creating and maintaining high-quality technical documentation following best practices for embedded C/C++ projects.
+
+## What Was Created
+
+### 1. Technical Writer Skill
+
+**Location**: `.github/skills/technical-documentation-writer/SKILL.md`
+
+A comprehensive skill definition that provides:
+- Documentation structure guidelines
+- Process for creating different types of documentation
+- Mermaid diagram templates and examples
+- Code example best practices
+- API documentation standards
+- Threading and memory management documentation patterns
+- Quality checklist
+- Maintenance guidelines
+
+**How to Use**: When creating or updating documentation, invoke the skill by mentioning it in your request or by reading the SKILL.md file for guidance.
+
+### 2. 
Documentation Structure + +The following directory structure has been established: + +``` +telemetry/ +├── README.md # ✨ NEW: Project overview and quick start +├── docs/ # ✨ NEW: General documentation +│ ├── README.md # Documentation index and navigation +│ ├── architecture/ +│ │ └── overview.md # System architecture with diagrams +│ └── api/ +│ └── public-api.md # Complete public API reference +└── source/ + └── docs/ # Component-specific documentation + ├── bulkdata/ + │ └── README.md # Bulk data component docs + └── scheduler/ + └── README.md # Scheduler component docs +``` + +### 3. Root README.md + +**Location**: `README.md` + +A comprehensive project README featuring: +- Project overview with architecture diagram +- Quick start guide with code examples +- Build instructions and Docker setup +- Configuration examples +- Documentation links +- Performance metrics +- Troubleshooting section +- Contributing guidelines + +**Purpose**: Primary entry point for developers, platform vendors, and contributors. + +### 4. Documentation Hub + +**Location**: `docs/README.md` + +Central navigation hub that provides: +- Overview of Telemetry 2.0 +- Documentation structure guide +- Quick links to all documentation +- Getting started paths for different user types +- Documentation conventions and style guide + +### 5. Architecture Documentation + +**Location**: `docs/architecture/overview.md` + +Comprehensive architecture documentation including: +- High-level system architecture with Mermaid diagrams +- Component descriptions and responsibilities +- Data flow diagrams with sequence flows +- Threading model with synchronization details +- Memory architecture and budgets +- Platform integration details +- Security model +- Performance characteristics +- Error handling philosophy + +### 6. 
API Reference + +**Location**: `docs/api/public-api.md` + +Complete public API documentation featuring: +- All public functions with signatures +- Parameter descriptions and constraints +- Return value documentation +- Thread safety information +- Working code examples +- Usage patterns and best practices +- Error codes reference +- Performance considerations + +### 7. Component Documentation Examples + +#### Scheduler Component +**Location**: `source/docs/scheduler/README.md` + +Example of component-level documentation with: +- Component overview and architecture +- Threading model and synchronization +- API reference for internal functions +- Memory management details +- Usage examples +- Troubleshooting guide + +#### Bulk Data Component +**Location**: `source/docs/bulkdata/README.md` + +Another component example demonstrating: +- Complex data structures +- Event processing flows +- Profile configuration examples +- Performance metrics +- Testing procedures + +## Documentation Features + +### Mermaid Diagrams + +All documentation uses Mermaid for diagrams, including: + +**Architecture Diagrams:** +```mermaid +graph TB + A[Component A] --> B[Component B] + B --> C[Component C] +``` + +**Sequence Diagrams:** +```mermaid +sequenceDiagram + Client->>Server: Request + Server-->>Client: Response +``` + +**State Diagrams:** +```mermaid +stateDiagram-v2 + [*] --> Idle + Idle --> Active + Active --> [*] +``` + +### Code Examples + +All code examples are: +- Fully compilable +- Include error handling +- Show proper resource cleanup +- Follow project coding standards +- Include comments explaining key concepts + +### Cross-References + +All documents include: +- Cross-references to related documentation +- File links with line numbers where appropriate +- Breadcrumb navigation +- "See Also" sections + +## How to Use the Documentation System + +### For Creating New Documentation + +1. **Read the Skill**: Review `.github/skills/technical-documentation-writer/SKILL.md` + +2. 
**Choose the Right Location**: + - General/architecture docs → `docs/` + - Component-specific docs → `source/docs/{component}/` + - API docs → `docs/api/` + +3. **Follow the Template**: Use existing docs as templates + +4. **Include Required Elements**: + - Overview section + - Mermaid diagrams for complex concepts + - Code examples + - Cross-references + - See Also section + +5. **Update Navigation**: Add links in relevant README.md files + +### For Requesting Documentation + +When asking the AI agent to create documentation: + +``` +Create documentation for the XConf client component following the +technical-documentation-writer skill guidelines. +``` + +Or more specifically: + +``` +Document the profile_create() API function including: +- Full signature +- Parameter details +- Return values +- Thread safety +- Memory ownership +- Working example +``` + +### For Maintaining Documentation + +1. **Keep in Sync**: Update docs when code changes +2. **Validate Examples**: Ensure code examples still compile +3. **Check Links**: Verify cross-references work +4. **Review Diagrams**: Update diagrams for architecture changes + +## Documentation Standards + +### Naming Conventions + +- **Files**: lowercase with hyphens (e.g., `threading-model.md`) +- **Directories**: lowercase, no spaces +- **Headings**: Title Case for major sections +- **Code symbols**: Use backticks (e.g., `function_name()`) + +### File Organization + +``` +component-name/ +├── README.md # Component overview (start here) +├── api-reference.md # Detailed API docs (optional) +├── implementation.md # Implementation details (optional) +└── examples.md # Extended examples (optional) +``` + +### Required Sections + +Every component README.md should have: +1. Overview (2-3 sentences) +2. Architecture (with diagram) +3. Key Components +4. Threading Model +5. Memory Management +6. API Reference +7. Usage Examples +8. 
See Also + +## Quality Checklist + +Before considering documentation complete: + +- [ ] All public APIs documented +- [ ] At least one working example per major function +- [ ] Thread safety explicitly stated +- [ ] Memory ownership documented +- [ ] Mermaid diagrams for complex flows +- [ ] Cross-references to related docs +- [ ] Code examples compile +- [ ] Spell-checked +- [ ] Reviewed by component author + +## Agent Usage Examples + +### Example 1: Create Component Documentation + +**Prompt:** +``` +Create comprehensive documentation for the Report Generator component in +source/docs/reportgen/. Follow the technical-documentation-writer skill +and use the scheduler component docs as a reference. +``` + +### Example 2: Update API Documentation + +**Prompt:** +``` +Update the public API documentation to add the new t2_event_batch() +function. Include signature, parameters, thread safety, examples, and +best practices. +``` + +### Example 3: Create Architecture Diagram + +**Prompt:** +``` +Create a Mermaid sequence diagram showing the complete flow from an +application sending an event through to HTTP transmission. Include all +intermediate components. +``` + +### Example 4: Write Integration Guide + +**Prompt:** +``` +Create a developer integration guide in docs/integration/developer-guide.md +that walks through integrating Telemetry 2.0 into a new RDK component. +Include build setup, basic usage, and common pitfalls. +``` + + +--- + +**Created**: March 2026 +**Maintained By**: Telemetry Team +**Status**: Active + +## Invoking the Agent + +To use the technical documentation writer agent in your development workflow, simply mention documentation needs in your requests: + +``` +Please document the new profile_validate() function following our +documentation standards. +``` + +Or invoke the skill explicitly: + +``` +Using the technical-documentation-writer skill, create architecture +documentation for the privacy control subsystem. 
+``` + +The agent will automatically follow the guidelines and produce documentation matching the established patterns and quality standards. diff --git a/.github/README.md b/.github/README.md new file mode 100644 index 00000000..6b0e8e7a --- /dev/null +++ b/.github/README.md @@ -0,0 +1,291 @@ +# Telemetry 2.0 + +[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE) +[![C](https://img.shields.io/badge/Language-C-blue.svg)](https://en.wikipedia.org/wiki/C_(programming_language)) +[![Platform](https://img.shields.io/badge/Platform-Embedded%20Linux-orange.svg)](https://www.yoctoproject.org/) + +A lightweight, efficient telemetry framework for RDK (Reference Design Kit) embedded devices. + +## Overview + +Telemetry 2.0 provides real-time monitoring, event collection, and reporting capabilities optimized for resource-constrained embedded devices such as set-top boxes, gateways, and IoT devices. + +### Key Features + +- ⚡ **Efficient**: Connection pooling and batch reporting +- 🔒 **Secure**: mTLS support for encrypted communication +- 📊 **Flexible**: Profile-based configuration (JSON/XConf) +- 🔧 **Platform-Independent**: Multiple architecture support + +### Architecture Highlights + +```mermaid +graph TB + A[Telemetry Events/Markers] --> B[Profile Matcher] + B --> C[Report Generator] + C --> D[HTTP Connection Pool] + D --> E[XConf Server / Data Collector] + F[XConf Client] -.->|Config| B + G[Scheduler] -.->|Triggers| C +``` + +## Quick Start + +### Prerequisites + +- GCC 4.8+ or Clang 3.5+ +- pthread library +- libcurl 7.65.0+ +- cJSON library +- OpenSSL 1.1.1+ (for mTLS) + +### Build + +```bash +# Clone repository +git clone https://github.com/rdkcentral/telemetry.git +cd telemetry + +# Configure +autoreconf -i +./configure + +# Build +make + +# Install +sudo make install +``` + +### Docker Development + +Refer to the provided Docker container for a consistent development environment: + 
+https://github.com/rdkcentral/docker-device-mgt-service-test + + +See [Build Setup Guide](docs/integration/build-setup.md) for detailed build options. + +### Basic Usage + +```c +#include "telemetry2_0.h" + +int main(void) { + // Initialize telemetry + if (t2_init() != 0) { + fprintf(stderr, "Failed to initialize telemetry\n"); + return -1; + } + + // Send a marker event + t2_event_s("SYS_INFO_DeviceBootup", "Device started successfully"); + + // Cleanup + t2_uninit(); + return 0; +} +``` + +Compile: `gcc -o myapp myapp.c -ltelemetry` + +## Documentation + +📚 **[Complete Documentation](docs/README.md)** + +### Key Documents + +- **[Architecture Overview](docs/architecture/overview.md)** - System design and components +- **[API Reference](docs/api/public-api.md)** - Public API documentation +- **[Developer Guide](docs/integration/developer-guide.md)** - Getting started +- **[Build Setup](docs/integration/build-setup.md)** - Build configuration +- **[Testing Guide](docs/integration/testing.md)** - Test procedures + +### Component Documentation + +Individual component documentation is in [`source/docs/`](source/docs/): + +- [Bulk Data System](source/docs/bulkdata/README.md) - Profile and marker management +- [HTTP Protocol](source/docs/protocol/README.md) - Communication layer +- [Scheduler](source/docs/scheduler/README.md) - Report scheduling +- [XConf Client](source/docs/xconf-client/README.md) - Configuration retrieval + +## Project Structure + +``` +telemetry/ +├── source/ # Source code +│ ├── bulkdata/ # Profile and marker management +│ ├── protocol/ # HTTP/RBUS communication +│ ├── scheduler/ # Report scheduling +│ ├── xconf-client/ # Configuration retrieval +│ ├── dcautil/ # Log marker utilities +│ └── test/ # Unit tests (gtest/gmock) +├── include/ # Public headers +├── config/ # Configuration files +├── docs/ # Documentation +├── containers/ # Docker development environment +└── test/ # Functional tests +``` + +## Configuration + +### Profile Configuration + 
+Telemetry uses JSON profiles to define what data to collect:
+
+```json
+{
+  "Profile": "RDKB_BasicProfile",
+  "Version": "1.0.0",
+  "Protocol": "HTTP",
+  "EncodingType": "JSON",
+  "ReportingInterval": 300,
+  "Parameters": [
+    {
+      "type": "dataModel",
+      "name": "Device.DeviceInfo.Manufacturer"
+    },
+    {
+      "type": "event",
+      "eventName": "bootup_complete"
+    }
+  ]
+}
+```
+
+See [Profile Configuration Guide](docs/integration/profile-configuration.md) for details.
+
+### Environment Variables
+
+| Variable | Description | Default |
+|----------|-------------|---------|
+| `T2_ENABLE_DEBUG` | Enable debug logging | `0` |
+| `T2_PROFILE_PATH` | Default profile path | `/etc/DefaultT2Profile.json` |
+| `T2_XCONF_URL` | XConf server URL | - |
+| `T2_REPORT_URL` | Report upload URL | - |
+
+## Development
+
+### Running Tests
+
+```bash
+# Unit tests
+make check
+
+# Functional tests
+cd test
+./run_ut.sh
+
+# Code coverage
+./cov_build.sh
+```
+
+### Development Container
+
+Use the provided Docker container for consistent development:
+https://github.com/rdkcentral/docker-device-mgt-service-test
+
+```bash
+cd docker-device-mgt-service-test
+docker compose up -d
+```
+
+The directory above the checkout is mounted into the container at `/mnt/L2_CONTAINER_SHARED_VOLUME`.
+Log in to the container as follows:
+
+```bash
+docker exec -it native-platform /bin/bash
+cd /mnt/L2_CONTAINER_SHARED_VOLUME/telemetry
+sh test/run_ut.sh
+```
+
+See [Docker Development Guide](containers/README.md) for more details.
+
+## Platform Support
+
+Telemetry 2.0 is designed to be platform-independent and has been tested on:
+
+- **RDK-B** (Broadband devices)
+- **RDK-V** (Video devices)
+- **Linux** (x86_64, ARM, ARM64)
+- **Yocto Project** builds
+
+See [Platform Porting Guide](docs/integration/platform-porting.md) for porting to new platforms.
+
+## Contributing
+
+We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
+
+### Development Workflow
+
+1. 
Fork the repository
+2. Create a feature branch (`git checkout -b feature/amazing-feature`)
+3. Make your changes
+4. Add tests for new functionality
+5. Ensure all L1 and L2 tests pass
+6. Commit your changes (`git commit -m 'Add amazing feature'`)
+7. Push to the branch (`git push origin feature/amazing-feature`)
+8. Open a Pull Request
+
+### Code Style
+
+- Follow the existing C code style and ensure the astyle formatting checks pass, using the commands below:
+
+```bash
+find . -name '*.c' -o -name '*.h' | xargs astyle --options=.astylerc
+find . -name '*.orig' -type f -delete
+```
+
+- Use descriptive variable names
+- Document all public APIs
+- Add unit tests for new functions
+- Add functional tests for new features
+
+See [Coding Guidelines](.github/instructions/c-embedded.instructions.md) for details.
+
+## Troubleshooting
+
+### Common Issues
+
+**Q: Telemetry not sending reports**
+
+- Check network connectivity
+- Verify XConf URL configuration
+- Review logs in `/var/log/telemetry/`
+
+**Q: High memory usage**
+
+- Reduce number of active profiles
+- Decrease reporting intervals
+- Check for memory leaks with valgrind
+
+**Q: Build errors**
+
+- Ensure all dependencies are installed
+- Check compiler version (GCC 4.8+)
+- Review build logs for missing libraries
+
+See [Troubleshooting Guide](docs/troubleshooting/common-errors.md) for more solutions.
+
+## License
+
+This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
+
+## Acknowledgments
+
+- RDK Management LLC
+- RDK Community Contributors
+- Open Source Community
+
+## Contact
+
+- **Repository**: https://github.com/rdkcentral/telemetry
+- **Issues**: https://github.com/rdkcentral/telemetry/issues
+- **RDK Central**: https://rdkcentral.com
+
+## Changelog
+
+See [CHANGELOG.md](CHANGELOG.md) for version history and release notes. 
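The leak-check advice in the troubleshooting section pairs with the single-exit cleanup pattern this project recommends for C code. A minimal sketch of that pattern follows; the function name `copy_profile_name` is illustrative and not part of the Telemetry API:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch of the single-exit cleanup pattern: every error path funnels
 * through one label, so no path can leak. Returns 0 on success, -1 on error. */
int copy_profile_name(const char *name, char **out)
{
    int rc = -1;
    char *result = NULL;
    size_t len = 0;

    if (name == NULL || out == NULL) {
        goto cleanup;                  /* invalid arguments */
    }

    len = strlen(name);
    result = malloc(len + 1);
    if (result == NULL) {              /* never assume malloc succeeds */
        goto cleanup;
    }

    memcpy(result, name, len + 1);     /* bounded copy, size computed above */
    *out = result;
    result = NULL;                     /* ownership transferred to the caller */
    rc = 0;

cleanup:
    free(result);                      /* no-op once ownership transferred */
    return rc;
}
```

Running the unit tests under `valgrind --leak-check=full` should then confirm that both the success and error paths release everything they allocate.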
+ +--- + +**Built for the RDK Community** diff --git a/.github/agents/embedded-c-expert.agent.md b/.github/agents/embedded-c-expert.agent.md new file mode 100644 index 00000000..e60ce0bc --- /dev/null +++ b/.github/agents/embedded-c-expert.agent.md @@ -0,0 +1,176 @@ +--- +name: 'Embedded C Expert' +description: 'Expert in embedded C development with focus on resource constraints, memory safety, and platform independence for RDK telemetry systems' +tools: ['codebase', 'search', 'edit', 'runCommands', 'runTests', 'problems', 'web'] +--- + +# Embedded C Development Expert + +You are an expert embedded systems C developer specializing in resource-constrained environments. You have deep knowledge of: + +- Memory management without garbage collection +- Platform-independent C programming +- Real-time and embedded systems constraints +- RDK (Reference Design Kit) architecture +- Telemetry and monitoring systems + +## Your Expertise + +### Memory Management +- RAII patterns in C using cleanup functions +- Memory pools and custom allocators +- Fragmentation prevention strategies +- Stack vs heap tradeoffs +- Valgrind and memory leak detection + +### Thread Safety and Concurrency +- Lightweight synchronization primitives (atomic operations, simple mutexes) +- Deadlock prevention (lock ordering, timeouts) +- Minimal thread memory configuration (pthread attributes) +- Lock-free patterns for embedded systems +- Thread pool design to prevent fragmentation +- Race condition detection and prevention + +### Resource Optimization +- Minimal CPU usage patterns +- Code size reduction techniques +- Static memory allocation strategies +- Efficient data structures for embedded systems +- Zero-copy techniques + +### Platform Independence +- POSIX compliance +- Endianness handling +- Type size portability (stdint.h) +- Build system abstractions +- Hardware abstraction layers + +### Code Quality +- Static analysis (cppcheck, scan-build) +- Unit testing with gtest/gmock from C +- Coverage 
analysis +- Defensive programming +- Error handling patterns + +## Your Approach + +### When Reviewing Code +1. Check for memory leaks (every malloc needs a free) +2. Verify error handling (all return values checked) +3. Validate resource cleanup (files, mutexes, etc.) +4. Ensure platform independence (no assumptions) +5. Look for buffer overflows and bounds checking +6. Verify thread safety if multi-threaded +7. Check for proper synchronization (no race conditions, no deadlocks) +8. Validate thread creation uses minimal stack attributes +9. Ensure lock-free patterns used where appropriate + +### When Writing Code +1. Start with function signature and error handling +2. Document ownership and lifetime of pointers +3. Use single exit point pattern for cleanup +4. Add bounds checking and validation +5. Write corresponding tests +6. Run valgrind to verify no leaks + +### When Refactoring +1. Don't change behavior (verify with tests) +2. Reduce memory footprint when possible +3. Improve error handling and logging +4. Extract common patterns into functions +5. Maintain backward compatibility +6. 
Update tests to match changes + +## Guidelines + +### Memory Safety +- Always check malloc/calloc return values +- Free memory in reverse order of allocation +- Use goto for cleanup in complex error paths +- NULL pointers after free to catch double-free +- Use const for read-only data +- Prefer stack allocation for small, fixed-size data + +### Performance +- Profile before optimizing (measure, don't guess) +- Cache frequently accessed data +- Minimize system calls +- Use atomic operations instead of locks when possible +- Keep critical sections minimal +- Use efficient algorithms (avoid O(n²)) +- Consider memory vs speed tradeoffs +- Know your platform's cache sizes + +### Maintainability +- Follow existing code style +- Use meaningful variable names +- Comment non-obvious logic (why, not what) +- Keep functions small and focused +- Avoid premature optimization +- Write self-documenting code + +### Platform Independence +- Use stdint.h for fixed-width types +- Use stdbool.h for boolean +- Handle endianness explicitly +- Don't assume structure packing +- Use configure checks for platform features +- Abstract platform-specific code + +## Anti-Patterns to Avoid + +```c +// Never assume malloc succeeds +char* buf = malloc(size); +strcpy(buf, input); // Crash if malloc failed! + +// Never ignore return values +fwrite(data, size, 1, file); // Did it succeed? + +// Never use magic numbers +if (size > 1024) { ... } // What is 1024? + +// Never leak on error paths +FILE* f = fopen(path, "r"); +if (error) return -1; // Leaked f! + + +// Never create threads with default stack size +pthread_create(&t, NULL, func, arg); // Wastes 8MB! + +// Never use inconsistent lock ordering +pthread_mutex_lock(&lock_a); +pthread_mutex_lock(&lock_b); // OK in func1 +// But in func2: +pthread_mutex_lock(&lock_b); +pthread_mutex_lock(&lock_a); // DEADLOCK! + +7. Use thread sanitizer for concurrent code +8. Test for race conditions with helgrind +9. 
+// Never use heavy locks for simple operations
+pthread_rwlock_wrlock(&lock);
+counter++; // Use atomic_int instead!
+pthread_rwlock_unlock(&lock);
+
+// Never assume integer sizes
+long timestamp; // 32 or 64 bits?
+```
+
+## Testing Focus
+
+For every change:
+1. Write tests that verify the behavior
+2. Run tests under valgrind to catch leaks
+3. Verify tests pass on target platform
+4. Check code coverage (aim for >80%)
+5. Run static analysis tools
+6. Test error paths and edge cases
+
+## Communication Style
+
+- Be direct and specific
+- Explain memory implications
+- Point out potential issues proactively
+- Suggest platform-independent alternatives
+- Reference specific line numbers
+- Provide complete, working code examples
diff --git a/.github/agents/legacy-refactor-specialist.agent.md b/.github/agents/legacy-refactor-specialist.agent.md
new file mode 100644
index 00000000..dcd7a39b
--- /dev/null
+++ b/.github/agents/legacy-refactor-specialist.agent.md
@@ -0,0 +1,263 @@
+---
+name: 'Legacy Code Refactoring Specialist'
+description: 'Expert in safely refactoring legacy C/C++ code while preventing regressions and maintaining API compatibility'
+tools: ['codebase', 'search', 'edit', 'runCommands', 'runTests', 'problems', 'usages']
+---
+
+# Legacy Code Refactoring Specialist
+
+You are a specialist in working with legacy embedded C/C++ code. You follow Michael Feathers' "Working Effectively with Legacy Code" principles adapted for embedded systems.
+
+## Your Mission
+
+Improve code quality, reduce technical debt, and enhance maintainability while:
+- **Zero regressions**: All existing tests must continue to pass
+- **API stability**: Maintain backward compatibility
+- **Resource constraints**: Don't increase memory footprint
+- **Production safety**: Code ships to millions of devices
+
+## Your Process
+
+### 1. 
Understand Before Changing +- Read and analyze the existing code thoroughly +- Identify all entry points and dependencies +- Map data flow and control flow +- Document current behavior with tests +- Find all callers using search tools + +### 2. Establish Safety Net +- Write characterization tests for existing behavior +- Run tests before ANY changes +- Use static analysis tools (cppcheck, valgrind) +- Create test coverage baseline +- Document any undefined behavior found + +### 3. Make Changes Incrementally +- One small change at a time +- Run full test suite after each change +- Verify memory usage hasn't increased +- Check for new static analysis warnings +- Commit frequently with clear messages + +### 4. Refactoring Patterns + +#### Extract Function +```c +// BEFORE: Long function with mixed concerns +int process_data(const char* input) { + // 200 lines of code doing multiple things + // Parsing, validation, transformation, storage +} + +// AFTER: Extracted, focused functions +static int validate_input(const char* input); +static int parse_data(const char* input, data_t* out); +static int store_data(const data_t* data); + +int process_data(const char* input) { + data_t data; + + if (validate_input(input) != 0) return -1; + if (parse_data(input, &data) != 0) return -1; + if (store_data(&data) != 0) return -1; + + return 0; +} +``` + +#### Introduce Seam (for testing) +```c +// BEFORE: Hard to test due to tight coupling +void process() { + FILE* f = fopen("/etc/config", "r"); + // ... process file ... + fclose(f); +} + +// AFTER: Dependency injection +typedef struct { + FILE* (*open_file)(const char* path); + // ... other dependencies ... +} dependencies_t; + +void process_with_deps(const dependencies_t* deps) { + FILE* f = deps->open_file("/etc/config"); + // ... process file ... 
+ fclose(f); +} + +// Production code +FILE* real_open(const char* path) { return fopen(path, "r"); } +dependencies_t prod_deps = { .open_file = real_open }; + +void process() { + process_with_deps(&prod_deps); +} + +// Test code can inject mocks +``` + +#### Reduce God Object +```c +// BEFORE: Huge structure with everything +typedef struct { + char config_path[256]; + int config_version; + FILE* log_file; + void* data_buffer; + size_t buffer_size; + // ... 50 more fields ... +} context_t; + +// AFTER: Separate concerns +typedef struct { + char path[256]; + int version; +} config_t; + +typedef struct { + FILE* file; +} logger_t; + +typedef struct { + void* buffer; + size_t size; +} data_buffer_t; + +// Compose only what's needed +typedef struct { + config_t* config; + logger_t* logger; + data_buffer_t* buffer; +} context_t; +``` + +### 5. Memory Optimization Patterns + +#### Replace Heap with Stack +```c +// BEFORE: Unnecessary heap allocation +char* format_message(const char* fmt, ...) { + char* buf = malloc(256); + // ... format into buf ... + return buf; // Caller must free +} + +// AFTER: Use stack (if size is known and reasonable) +#define MSG_MAX_SIZE 256 + +int format_message(char* buf, size_t size, const char* fmt, ...) { + // ... format into buf ... 
+ return strlen(buf); +} + +// Caller: +char msg[MSG_MAX_SIZE]; +format_message(msg, sizeof(msg), "Error: %d", code); +``` + +#### Memory Pool for Frequent Allocations +```c +// BEFORE: Frequent malloc/free causing fragmentation +for (int i = 0; i < 1000; i++) { + event_t* e = malloc(sizeof(event_t)); + process_event(e); + free(e); +} + +// AFTER: Pre-allocated pool +#define EVENT_POOL_SIZE 10 + +typedef struct { + event_t events[EVENT_POOL_SIZE]; + bool used[EVENT_POOL_SIZE]; +} event_pool_t; + +event_t* event_pool_acquire(event_pool_t* pool); +void event_pool_release(event_pool_t* pool, event_t* event); + +// Usage +event_pool_t pool = {0}; +for (int i = 0; i < 1000; i++) { + event_t* e = event_pool_acquire(&pool); + process_event(e); + event_pool_release(&pool, e); +} +``` + +## Regression Prevention + +### Before Any Refactoring +1. Ensure all existing tests pass +2. Run valgrind (no leaks in current code) +3. Measure memory footprint baseline +4. Document current behavior + +### During Refactoring +1. Make one logical change at a time +2. Run tests after EVERY change +3. Use git to create checkpoint commits +4. Monitor memory usage + +### After Refactoring +1. All tests still pass +2. No new memory leaks (valgrind) +3. Memory footprint same or better +4. No new compiler warnings +5. Static analysis clean +6. Code review by human + +## Communication + +### When Proposing Changes +- Explain the problem being solved +- Show before/after comparison +- Highlight safety measures +- Document any risks +- Estimate memory impact + +### When Blocked +- Explain what's preventing progress +- Suggest alternatives +- Ask for clarification on requirements +- Note any missing tests + +### Code Review Focus +- Point out missing error handling +- Identify memory leak risks +- Note API compatibility concerns +- Suggest additional test cases +- Highlight complexity that could be simplified + +## Emergency Procedures + +If tests start failing: +1. **STOP** immediately +2. 
Review the last change +3. Use git diff to see what changed +4. Revert if cause isn't obvious +5. Fix the issue before continuing + +If memory leaks detected: +1. **STOP** the refactoring +2. Run valgrind to identify leak +3. Fix the leak +4. Verify fix with valgrind +5. Resume refactoring + +If API breaks: +1. **REVERT** the breaking change +2. Find alternative approach +3. Use wrapper functions if needed +4. Maintain old API alongside new + +## Success Criteria + +You've succeeded when: +- All tests pass +- No memory leaks (valgrind clean) +- Code is more maintainable +- No API breaks +- Memory footprint same or improved +- Complexity metrics improved +- Test coverage maintained or improved diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md new file mode 100644 index 00000000..28374c84 --- /dev/null +++ b/.github/copilot-instructions.md @@ -0,0 +1,56 @@ +# Telemetry Embedded Systems Development Guidelines + +This repository contains the Telemetry 2.0 framework for RDK (Reference Design Kit), designed for embedded devices with constrained resources. + +## Project Context + +- **Target Platform**: Embedded devices (STBs, gateways, IoT devices) +- **Resource Constraints**: Limited memory (often <512MB RAM), limited CPU +- **Languages**: C (primary), C++ (tests), Shell scripts +- **Build System**: Autotools (configure.ac, Makefile.am) +- **Testing**: Google Test (gtest) and Google Mock (gmock) +- **Platform Requirements**: Must be platform-independent, supporting multiple architectures + +## Critical Requirements + +1. **Memory Safety**: Every allocation must have a corresponding free +2. **Resource Constraints**: Minimize memory footprint and CPU usage +3. **Platform Independence**: Avoid platform-specific code; use abstraction layers +4. **Backward Compatibility**: Maintain API/ABI compatibility +5. **Zero Regression**: All changes must pass existing test suites +6. 
**Thread Safety**: Use lightweight synchronization, prevent deadlocks, minimize memory fragmentation +7. **Production Quality**: Code ships to millions of devices + +## Architecture Principles + +- Use RAII (Resource Acquisition Is Initialization) patterns in C++ test code +- Prefer stack allocation over heap when possible +- Use memory pools for frequent allocations +- Create threads with explicit stack sizes to minimize memory usage +- Use lightweight synchronization primitives (atomic operations, simple mutexes) +- Prevent deadlocks through consistent lock ordering and timeouts +- Implement proper error handling (no silent failures) +- Follow defensive programming practices +- Design for testability from the start + +## Code Review Standards + +All code changes must: +- Pass static analysis (cppcheck, valgrind) +- Have zero memory leaks (verified by valgrind) +- Include unit tests for new functionality +- Maintain or improve code coverage +- Follow existing naming conventions +- Include clear commit messages explaining why, not just what + +Refer to language-specific instruction files in `.github/instructions/` for detailed guidelines. + +## Available Skills + +Specialized skills for common development tasks: + +- **quality-checker**: Run comprehensive quality checks (static analysis, memory safety, thread safety, build verification) in the standard test container. Use when validating code changes or debugging before committing. +- **memory-safety-analyzer**: Analyze C/C++ code for memory safety issues including leaks, use-after-free, buffer overflows, and provide fixes. +- **thread-safety-analyzer**: Analyze C/C++ code for thread safety issues including race conditions, deadlocks, and improper synchronization. +- **platform-portability-checker**: Verify C/C++ code is platform-independent and portable across embedded platforms. +- **technical-documentation-writer**: Create and maintain comprehensive technical documentation for embedded systems projects. 
diff --git a/.github/instructions/build-system.instructions.md b/.github/instructions/build-system.instructions.md
new file mode 100644
index 00000000..05ace306
--- /dev/null
+++ b/.github/instructions/build-system.instructions.md
@@ -0,0 +1,137 @@
+---
+applyTo: "**/Makefile.am,**/configure.ac,**/*.ac,**/*.mk"
+---
+
+# Build System Standards (Autotools)
+
+## Autotools Best Practices
+
+### configure.ac
+- Check for required headers and functions
+- Provide clear error messages for missing dependencies
+- Support cross-compilation
+- Allow feature toggles
+
+```autoconf
+# GOOD: Check for required features
+AC_CHECK_HEADERS([pthread.h], [],
+    [AC_MSG_ERROR([pthread.h is required])])
+
+AC_CHECK_LIB([pthread], [pthread_create], [],
+    [AC_MSG_ERROR([pthread library is required])])
+
+# GOOD: Optional features with clear naming
+AC_ARG_ENABLE([gtest],
+    AS_HELP_STRING([--enable-gtest], [Enable Google Test support]),
+    [enable_gtest=$enableval],
+    [enable_gtest=no])
+
+AM_CONDITIONAL([WITH_GTEST_SUPPORT], [test "x$enable_gtest" = "xyes"])
+```
+
+### Makefile.am
+- Use non-recursive makefiles when possible
+- Minimize intermediate libraries
+- Support parallel builds
+- Link only what's needed
+
+```makefile
+# GOOD: Minimal linking
+bin_PROGRAMS = telemetry2_0
+
+telemetry2_0_SOURCES = telemetry2_0.c
+telemetry2_0_CFLAGS = -DFEATURE_SUPPORT_RDKLOG
+telemetry2_0_LDADD = \
+    $(top_builddir)/source/utils/libutils.la \
+    $(top_builddir)/source/bulkdata/libbulkdata.la \
+    -lpthread

+# GOOD: Conditional compilation
+if WITH_GTEST_SUPPORT
+SUBDIRS += test
+endif
+```
+
+## Cross-Compilation Support
+
+### Platform Detection
+```autoconf
+# Support different target platforms.
+# GNU host triplets are cpu-vendor-os, so match the CPU field first:
+# a pattern like *-arm* never matches arm-linux-gnueabi.
+case "$host" in
+    arm*)
+        AC_DEFINE([PLATFORM_ARM], [1], [ARM platform])
+        ;;
+    *-linux*)
+        AC_DEFINE([PLATFORM_LINUX], [1], [Linux platform])
+        ;;
+esac
+```
+
+### Compiler Flags
+```makefile
+# Platform-specific optimizations
+if TARGET_ARM
+AM_CFLAGS += -march=armv7-a -mfpu=neon
+endif
+
+# 
Debug vs Release +if DEBUG_BUILD +AM_CFLAGS += -g -O0 -DDEBUG +else +AM_CFLAGS += -O2 -DNDEBUG +endif +``` + +## Dependency Management + +### Package Config +```autoconf +# Use pkg-config for external dependencies +PKG_CHECK_MODULES([DBUS], [dbus-1 >= 1.6]) +AC_SUBST([DBUS_CFLAGS]) +AC_SUBST([DBUS_LIBS]) +``` + +### Header Organization +```makefile +# Include paths +AM_CPPFLAGS = -I$(top_srcdir)/include \ + -I$(top_srcdir)/source/utils \ + -I$(top_srcdir)/source/bulkdata \ + $(DBUS_CFLAGS) +``` + +## Build Performance + +### Parallel Builds +- Support `make -j` +- Avoid circular dependencies +- Use order-only prerequisites when appropriate + +### Incremental Builds +- Proper dependency tracking +- Don't force full rebuilds unless necessary +- Use libtool for shared libraries + +## Testing Integration + +```makefile +# Test targets +check-local: + @echo "Running memory leak tests..." + @for test in $(TESTS); do \ + valgrind --leak-check=full \ + --error-exitcode=1 \ + ./$$test || exit 1; \ + done + +# Code coverage +if ENABLE_COVERAGE +AM_CFLAGS += --coverage +AM_LDFLAGS += --coverage +endif + +coverage: check + $(LCOV) --capture --directory . 
--output-file coverage.info + $(GENHTML) coverage.info --output-directory coverage +``` diff --git a/.github/instructions/c-embedded.instructions.md b/.github/instructions/c-embedded.instructions.md new file mode 100644 index 00000000..16b1eef5 --- /dev/null +++ b/.github/instructions/c-embedded.instructions.md @@ -0,0 +1,693 @@ +--- +applyTo: "**/*.c,**/*.h" +--- + +# C Programming Standards for Embedded Systems + +## Memory Management + +### Allocation Rules +- **Prefer stack allocation** for fixed-size, short-lived data +- **Use malloc/free** only when necessary; always pair them +- **Check all allocations**: Never assume malloc succeeds +- **Free in reverse order** of allocation to reduce fragmentation +- **Use memory pools** for frequent same-size allocations +- **Zero memory after free** to catch use-after-free bugs in debug builds + +```c +// GOOD: Stack allocation for fixed-size data +char buffer[256]; + +// GOOD: Checked heap allocation with cleanup +char* data = malloc(size); +if (!data) { + log_error("Failed to allocate %zu bytes", size); + return ERR_NO_MEMORY; +} +// ... use data ... +free(data); +data = NULL; // Prevent double-free + +// BAD: Unchecked allocation +char* data = malloc(size); +strcpy(data, input); // Crash if malloc failed +``` + +### Memory Leak Prevention +- Every function that allocates must document ownership transfer +- Use goto for single exit point in complex error handling +- Implement cleanup functions for complex structures +- Use valgrind regularly during development + +```c +// GOOD: Single exit point with cleanup +int process_data(const char* input) { + int ret = 0; + char* buffer = NULL; + FILE* file = NULL; + + buffer = malloc(BUFFER_SIZE); + if (!buffer) { + ret = ERR_NO_MEMORY; + goto cleanup; + } + + file = fopen(input, "r"); + if (!file) { + ret = ERR_FILE_OPEN; + goto cleanup; + } + + // ... processing ... 
+ +cleanup: + free(buffer); + if (file) fclose(file); + return ret; +} +``` + +## Resource Constraints + +### Code Size Optimization +- Avoid inline functions unless proven beneficial +- Share common code paths +- Use function pointers for conditional logic in tables +- Strip debug symbols in release builds + +### CPU Optimization +- Minimize system calls +- Cache frequently accessed data +- Use efficient algorithms (prefer O(n) over O(n²)) +- Avoid floating point on devices without FPU +- Profile before optimizing (don't guess) + +### Memory Optimization +- Use bitfields for boolean flags +- Pack structures to minimize padding +- Use const for read-only data (goes in .rodata) +- Prefer static buffers with maximum sizes when bounds are known +- Implement object pools for frequently created/destroyed objects + +```c +// GOOD: Packed structure +typedef struct __attribute__((packed)) { + uint8_t flags; + uint16_t id; + uint32_t timestamp; + char name[32]; +} telemetry_event_t; + +// GOOD: Const data in .rodata +static const char* const ERROR_MESSAGES[] = { + "Success", + "Out of memory", + "Invalid parameter", + // ... 
+};
+```
+
+## Platform Independence
+
+### Never Assume
+- Pointer size (use uintptr_t for pointer arithmetic)
+- Byte order (use htonl/ntohl for network data)
+- Structure packing (use __attribute__((packed)) or #pragma pack)
+- Integer sizes (use int32_t, uint64_t from stdint.h)
+- Boolean type (use stdbool.h)
+
+```c
+// GOOD: Platform-independent types
+#include <stdint.h>
+#include <stdbool.h>
+
+typedef struct {
+    uint32_t id;        // Always 32 bits
+    uint64_t timestamp; // Always 64 bits
+    bool enabled;       // Standard boolean
+} config_t;
+
+// GOOD: Endianness handling
+uint32_t network_value = htonl(host_value);
+
+// BAD: Assumptions
+int id;         // Size varies by platform
+long timestamp; // 32 or 64 bits depending on platform
+```
+
+### Abstraction Layers
+- Use platform abstraction for OS-specific code
+- Isolate hardware dependencies
+- Use configure.ac to detect platform capabilities
+
+## Error Handling
+
+### Return Value Convention
+- Return 0 for success, negative for errors
+- Use errno for system call failures
+- Define error codes in header files
+- Never ignore return values
+
+```c
+// GOOD: Consistent error handling
+typedef enum {
+    T2ERROR_SUCCESS = 0,
+    T2ERROR_FAILURE = -1,
+    T2ERROR_INVALID_PARAM = -2,
+    T2ERROR_NO_MEMORY = -3,
+    T2ERROR_TIMEOUT = -4
+} T2ERROR;
+
+T2ERROR init_telemetry() {
+    if (!validate_config()) {
+        return T2ERROR_INVALID_PARAM;
+    }
+
+    if (allocate_resources() != 0) {
+        return T2ERROR_NO_MEMORY;
+    }
+
+    return T2ERROR_SUCCESS;
+}
+```
+
+### Logging
+- Use severity levels appropriately
+- Log errors with context (function, line, errno)
+- Avoid logging in hot paths
+- Make logging configurable at runtime
+- Never log sensitive data
+
+```c
+// GOOD: Contextual error logging
+if (ret != 0) {
+    T2Error("%s:%d Failed to initialize: %s (errno=%d)",
+            __FUNCTION__, __LINE__, strerror(errno), errno);
+    return T2ERROR_FAILURE;
+}
+```
+
+## Thread Safety and Concurrency
+
+### Critical Principles
+
+- **Minimize synchronization overhead**: Use 
lightweight primitives
+- **Prevent deadlocks**: Establish lock ordering, use timeouts
+- **Avoid memory fragmentation**: Configure thread stack sizes appropriately
+- **Reduce contention**: Design for lock-free patterns where possible
+- **Document thread safety**: Mark functions as thread-safe or not
+
+### Thread Creation with Minimal Memory
+
+Always create threads with attributes that specify required memory:
+
+```c
+// GOOD: Thread with minimal stack size
+#include <pthread.h>
+
+#define THREAD_STACK_SIZE (64 * 1024) // 64KB instead of default (often 8MB)
+
+pthread_t thread;
+pthread_attr_t attr;
+
+// Initialize attributes
+pthread_attr_init(&attr);
+
+// Set minimal stack size (reduces memory fragmentation)
+pthread_attr_setstacksize(&attr, THREAD_STACK_SIZE);
+
+// Detached threads free resources immediately when done
+pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
+
+// Create thread
+int ret = pthread_create(&thread, &attr, thread_function, arg);
+if (ret != 0) {
+    T2Error("Failed to create thread: %s", strerror(ret));
+    pthread_attr_destroy(&attr);
+    return T2ERROR_FAILURE;
+}
+
+// Clean up attributes
+pthread_attr_destroy(&attr);
+
+// BAD: Default thread (wastes memory)
+pthread_create(&thread, NULL, thread_function, arg); // Uses 8MB stack!
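// Note (illustrative sketch; helper name is assumed, not part of the API):
// when choosing a custom stack size, clamp it to the platform floor --
// pthread_attr_setstacksize() fails with EINVAL below PTHREAD_STACK_MIN
// (from <limits.h>), which varies between libc implementations.
static size_t clamp_stack_size(size_t requested) {
    size_t min_size = (size_t)PTHREAD_STACK_MIN;
    return (requested < min_size) ? min_size : requested;
}
// Usage: pthread_attr_setstacksize(&attr, clamp_stack_size(THREAD_STACK_SIZE));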
+```
+
+### Lightweight Synchronization
+
+Prefer lightweight synchronization primitives to avoid deadlocks and overhead:
+
+```c
+// GOOD: Simple mutex with minimal overhead
+typedef struct {
+    pthread_mutex_t lock;
+    int counter;
+} thread_safe_counter_t;
+
+int init_counter(thread_safe_counter_t* c) {
+    // Use default attributes (lightest weight)
+    pthread_mutex_init(&c->lock, NULL);
+    c->counter = 0;
+    return 0;
+}
+
+void increment_counter(thread_safe_counter_t* c) {
+    pthread_mutex_lock(&c->lock);
+    c->counter++;
+    pthread_mutex_unlock(&c->lock);
+}
+
+void cleanup_counter(thread_safe_counter_t* c) {
+    pthread_mutex_destroy(&c->lock);
+}
+
+// GOOD: Use atomic operations when possible (no locks needed)
+#include <stdatomic.h>
+
+typedef struct {
+    atomic_int counter; // Lock-free!
+} lockfree_counter_t;
+
+void increment_lockfree(lockfree_counter_t* c) {
+    atomic_fetch_add(&c->counter, 1); // No mutex overhead
+}
+```
+
+### Deadlock Prevention
+
+Follow strict rules to prevent deadlocks:
+
+```c
+// GOOD: Consistent lock ordering
+typedef struct {
+    pthread_mutex_t lock_a;
+    pthread_mutex_t lock_b;
+    // ... data ...
+} resource_t;
+
+// RULE: Always acquire locks in alphabetical order (a, then b)
+void multi_lock_operation(resource_t* r) {
+    pthread_mutex_lock(&r->lock_a); // First: lock_a
+    pthread_mutex_lock(&r->lock_b); // Second: lock_b
+
+    // ... critical section ...
+
+    pthread_mutex_unlock(&r->lock_b); // Release in reverse order
+    pthread_mutex_unlock(&r->lock_a);
+}
+
+// GOOD: Use trylock with timeout to avoid indefinite blocking
+#include <errno.h>
+#include <time.h>
+
+int safe_lock_with_timeout(pthread_mutex_t* lock, int timeout_ms) {
+    struct timespec ts;
+    clock_gettime(CLOCK_REALTIME, &ts);
+    ts.tv_sec += timeout_ms / 1000;
+    ts.tv_nsec += (timeout_ms % 1000) * 1000000L;
+    if (ts.tv_nsec >= 1000000000L) { // Normalize: tv_nsec must stay below 1e9
+        ts.tv_sec += 1;
+        ts.tv_nsec -= 1000000000L;
+    }
+
+    int ret = pthread_mutex_timedlock(lock, &ts);
+    if (ret == ETIMEDOUT) {
+        T2Error("Lock timeout - potential deadlock detected");
+        return -1;
+    }
+    return ret;
+}
+
+// BAD: Different lock order in different functions (DEADLOCK RISK!)
+void bad_function_1(resource_t* r) {
+    pthread_mutex_lock(&r->lock_a);
+    pthread_mutex_lock(&r->lock_b); // Order: a, b
+    // ...
+}
+
+void bad_function_2(resource_t* r) {
+    pthread_mutex_lock(&r->lock_b);
+    pthread_mutex_lock(&r->lock_a); // Order: b, a - DEADLOCK!
+    // ...
+}
+```
+
+### Avoid Heavy Synchronization
+
+Heavy synchronization causes performance issues and fragmentation:
+
+```c
+// BAD: Reader-writer lock for simple counter (overkill)
+pthread_rwlock_t heavy_lock;
+int counter;
+
+void heavy_increment() {
+    pthread_rwlock_wrlock(&heavy_lock); // Too heavy!
+    counter++;
+    pthread_rwlock_unlock(&heavy_lock);
+}
+
+// GOOD: Use appropriate synchronization level
+atomic_int light_counter; // Lock-free for simple operations
+
+void light_increment() {
+    atomic_fetch_add(&light_counter, 1); // No lock overhead
+}
+
+// BAD: Fine-grained locking everywhere (lock thrashing)
+typedef struct {
+    pthread_mutex_t lock;
+    int value;
+} each_field_locked_t; // Don't do this!
+
+// GOOD: Coarse-grained locking for related data
+typedef struct {
+    pthread_mutex_t lock;
+    int value_a;
+    int value_b;
+    int value_c; // All protected by one lock
+} properly_locked_t;
+```
+
+### Lock-Free Patterns
+
+Use lock-free patterns to avoid synchronization overhead:
+
+```c
+// GOOD: Lock-free flag
+#include <stdatomic.h>
+#include <stdbool.h>
+
+typedef struct {
+    atomic_bool shutdown_requested;
+} thread_control_t;
+
+void request_shutdown(thread_control_t* ctrl) {
+    atomic_store(&ctrl->shutdown_requested, true);
+}
+
+bool should_shutdown(thread_control_t* ctrl) {
+    return atomic_load(&ctrl->shutdown_requested);
+}
+
+// GOOD: Lock-free queue for single producer, single consumer
+typedef struct {
+    atomic_int read_index;
+    atomic_int write_index;
+    void* buffer[256];
+} spsc_queue_t;
+
+bool spsc_enqueue(spsc_queue_t* q, void* item) {
+    int write = atomic_load(&q->write_index);
+    int next_write = (write + 1) % 256;
+
+    if (next_write == atomic_load(&q->read_index)) {
+        return false; // Queue full
+    }
+
+    q->buffer[write] = item;
+    atomic_store(&q->write_index, next_write);
+    return true;
+}
+```
+
+### Minimize Critical Sections
+
+Keep locked sections as short as possible:
+
+```c
+// BAD: Long critical section
+void bad_process(data_t* shared) {
+    pthread_mutex_lock(&shared->lock);
+
+    // Heavy computation while holding lock (BAD!)
+ for (int i = 0; i < 1000000; i++) { + compute_something(); + } + + shared->value = result; + pthread_mutex_unlock(&shared->lock); +} + +// GOOD: Minimal critical section +void good_process(data_t* shared) { + // Do heavy computation WITHOUT lock + int result = 0; + for (int i = 0; i < 1000000; i++) { + result += compute_something(); + } + + // Lock only for the update + pthread_mutex_lock(&shared->lock); + shared->value = result; + pthread_mutex_unlock(&shared->lock); +} +``` + +### Thread-Safe Initialization + +Use pthread_once for thread-safe initialization: + +```c +// GOOD: Thread-safe singleton initialization +static pthread_once_t init_once = PTHREAD_ONCE_INIT; +static config_t* global_config = NULL; + +static void init_config_once(void) { + global_config = malloc(sizeof(config_t)); + // ... initialize config ... +} + +config_t* get_config(void) { + pthread_once(&init_once, init_config_once); + return global_config; +} + +// BAD: Double-checked locking (broken in C without memory barriers) +static pthread_mutex_t init_lock; +static config_t* config = NULL; + +config_t* bad_get_config(void) { + if (config == NULL) { // First check (no lock) + pthread_mutex_lock(&init_lock); + if (config == NULL) { // Second check + config = malloc(sizeof(config_t)); // Race condition! + } + pthread_mutex_unlock(&init_lock); + } + return config; +} +``` + +### Thread Safety Documentation + +Always document thread safety expectations: + +```c +// GOOD: Clear thread safety documentation + +/** + * Process telemetry event + * @param event Event to process + * @return 0 on success, negative on error + * + * Thread Safety: This function is thread-safe and may be called + * from multiple threads concurrently. + */ +int process_event(const event_t* event) { + // Uses internal locking +} + +/** + * Initialize event processor + * @return 0 on success, negative on error + * + * Thread Safety: NOT thread-safe. 
Must be called once during + * initialization before any worker threads start. + */ +int init_event_processor(void) { + // No locking - initialization only +} + +/** + * Get current statistics + * @param stats Output buffer for statistics + * + * Thread Safety: Caller must hold stats_lock before calling. + * Use get_stats_safe() for automatic locking. + */ +void get_stats_unlocked(stats_t* stats) { + // Assumes caller holds lock +} +``` + +### Memory Fragmentation Prevention + +Configure thread pools to prevent fragmentation: + +```c +// GOOD: Thread pool with pre-allocated threads +#define THREAD_POOL_SIZE 4 +#define WORK_QUEUE_SIZE 256 + +typedef struct { + pthread_t threads[THREAD_POOL_SIZE]; + pthread_attr_t thread_attr; + // ... work queue ... +} thread_pool_t; + +int init_thread_pool(thread_pool_t* pool) { + // Configure thread attributes once + pthread_attr_init(&pool->thread_attr); + pthread_attr_setstacksize(&pool->thread_attr, THREAD_STACK_SIZE); + pthread_attr_setdetachstate(&pool->thread_attr, PTHREAD_CREATE_JOINABLE); + + // Create fixed number of threads (no dynamic allocation) + for (int i = 0; i < THREAD_POOL_SIZE; i++) { + int ret = pthread_create(&pool->threads[i], &pool->thread_attr, + worker_thread, pool); + if (ret != 0) { + // Cleanup already created threads + cleanup_partial_pool(pool, i); + return -1; + } + } + + return 0; +} + +// BAD: Creating threads dynamically (causes fragmentation) +void bad_handle_request(request_t* req) { + pthread_t thread; + pthread_create(&thread, NULL, handle_one_request, req); + pthread_detach(thread); // New thread for each request! 
+}
+```
+
+### Testing Thread Safety
+
+```c
+// GOOD: Test for race conditions
+#include <pthread.h>
+
+TEST(ThreadSafety, ConcurrentIncrement) {
+    thread_safe_counter_t counter = {0};
+    init_counter(&counter);
+
+    const int NUM_THREADS = 10;
+    const int INCREMENTS_PER_THREAD = 1000;
+    pthread_t threads[NUM_THREADS];
+
+    // Create multiple threads
+    for (int i = 0; i < NUM_THREADS; i++) {
+        pthread_create(&threads[i], NULL,
+                       increment_n_times, &counter);
+    }
+
+    // Wait for all threads
+    for (int i = 0; i < NUM_THREADS; i++) {
+        pthread_join(threads[i], NULL);
+    }
+
+    // Verify no race conditions
+    EXPECT_EQ(counter.counter, NUM_THREADS * INCREMENTS_PER_THREAD);
+
+    cleanup_counter(&counter);
+}
+```
+
+### Static Analysis for Concurrency
+
+```bash
+# Use thread sanitizer to detect race conditions
+gcc -g -fsanitize=thread source.c -o program
+./program
+
+# Use helgrind (valgrind) to detect synchronization issues
+valgrind --tool=helgrind ./program
+
+# Check for deadlocks
+valgrind --tool=helgrind --track-lockorders=yes ./program
+```
+
+## Code Style
+
+### Naming Conventions
+- Functions: `snake_case` (e.g., `init_telemetry`)
+- Types: `snake_case_t` (e.g., `telemetry_event_t`)
+- Macros/Constants: `UPPER_SNAKE_CASE` (e.g., `MAX_BUFFER_SIZE`)
+- Global variables: `g_` prefix (avoid when possible)
+- Static variables: `s_` prefix
+
+### File Organization
+- One .c file per module
+- Corresponding .h file for public interface
+- Internal functions marked static
+- Header guards in all .h files
+
+```c
+// GOOD: header guard
+#ifndef TELEMETRY_INTERNAL_H
+#define TELEMETRY_INTERNAL_H
+
+// ... declarations ...
+ +#endif /* TELEMETRY_INTERNAL_H */ +``` + +## Testing Requirements + +### Unit Tests +- Test all public functions +- Test error paths and edge cases +- Use mocks for external dependencies +- Verify resource cleanup (no leaks) +- Run tests under valgrind + +### Memory Testing +```bash +# Run with memory checking +valgrind --leak-check=full --show-leak-kinds=all \ + --track-origins=yes ./test_binary + +# Static analysis +cppcheck --enable=all --inconclusive source/ +``` + +## Anti-Patterns to Avoid + +```c +// BAD: Magic numbers +if (size > 1024) { ... } + +// GOOD: Named constants +#define MAX_PACKET_SIZE 1024 +if (size > MAX_PACKET_SIZE) { ... } + +// BAD: Unchecked allocation +char* buf = malloc(size); +strcpy(buf, input); + +// GOOD: Checked with cleanup +char* buf = malloc(size); +if (!buf) return ERR_NO_MEMORY; +strncpy(buf, input, size - 1); +buf[size - 1] = '\0'; + +// BAD: Memory leak in error path +FILE* f = fopen(path, "r"); +if (condition) return -1; // Leaked f +fclose(f); + +// GOOD: Cleanup on all paths +FILE* f = fopen(path, "r"); +if (!f) return -1; +if (condition) { + fclose(f); + return -1; +} +fclose(f); +return 0; +``` + +## References + +- Project follows RDK coding standards +- See telemetry2_0.h for API documentation +- Review existing code in source/ for patterns +- Check test/ directory for testing examples diff --git a/.github/instructions/cpp-testing.instructions.md b/.github/instructions/cpp-testing.instructions.md new file mode 100644 index 00000000..a9ba5d95 --- /dev/null +++ b/.github/instructions/cpp-testing.instructions.md @@ -0,0 +1,183 @@ +--- +applyTo: "source/test/**/*.cpp,source/test/**/*.h" +--- + +# C++ Testing Standards (Google Test) + +## Test Framework + +Use Google Test (gtest) and Google Mock (gmock) for all C++ test code. 
+
+## Test Organization
+
+### File Structure
+- One test file per source file: `foo.c` → `test/FooTest.cpp`
+- Test fixtures for complex setups
+- Mocks in separate files when reusable
+
+```cpp
+// GOOD: Test file structure
+// filepath: source/test/utils/UtilsTest.cpp
+
+extern "C" {
+#include <vector.h>
+#include <telemetry2_0.h>
+}
+
+#include <gtest/gtest.h>
+#include <gmock/gmock.h>
+
+class UtilsTest : public ::testing::Test {
+protected:
+    void SetUp() override {
+        // Initialize test resources
+    }
+
+    void TearDown() override {
+        // Clean up test resources
+    }
+};
+
+TEST_F(UtilsTest, VectorCreateAndDestroy) {
+    Vector* vec = Vector_Create();
+    ASSERT_NE(nullptr, vec);
+
+    Vector_Destroy(vec, nullptr);
+    // No memory leaks!
+}
+```
+
+## Testing Patterns
+
+### Test C Code from C++
+- Wrap C headers in `extern "C"` blocks
+- Use RAII in tests for automatic cleanup
+- Mock C functions using gmock when needed
+
+```cpp
+extern "C" {
+#include "telemetry2_0.h"
+}
+
+#include <gtest/gtest.h>
+
+class TelemetryTest : public ::testing::Test {
+protected:
+    void SetUp() override {
+        // Initialize telemetry
+        t2_init("test_component");
+    }
+
+    void TearDown() override {
+        // Clean up
+        t2_uninit();
+    }
+};
+
+TEST_F(TelemetryTest, EventSubmission) {
+    T2ERROR result = t2_event_s("TEST_EVENT", "test_value");
+    EXPECT_EQ(T2ERROR_SUCCESS, result);
+}
+```
+
+### Memory Leak Testing
+- All tests must pass valgrind
+- Use RAII wrappers for C resources
+- Verify cleanup in TearDown
+
+```cpp
+// GOOD: RAII wrapper for C resource
+class FileHandle {
+    FILE* file_;
+public:
+    explicit FileHandle(const char* path, const char* mode)
+        : file_(fopen(path, mode)) {}
+
+    ~FileHandle() {
+        if (file_) fclose(file_);
+    }
+
+    FILE* get() const { return file_; }
+    bool valid() const { return file_ != nullptr; }
+};
+
+TEST(FileTest, ReadConfig) {
+    FileHandle file("/tmp/config.json", "r");
+    ASSERT_TRUE(file.valid());
+    // file automatically closed when test exits
+}
+```
+
+### Mocking External Dependencies
+
+```cpp
+// GOOD: Mock for system calls
+class 
MockScheduler { +public: + MOCK_METHOD(T2ERROR, registerProfile, + (const char*, unsigned int, unsigned int, + bool, bool, bool, unsigned int, char*)); + MOCK_METHOD(T2ERROR, unregisterProfile, (const char*)); +}; + +TEST(SchedulerTest, ProfileRegistration) { + MockScheduler mock; + + EXPECT_CALL(mock, registerProfile( + "profile1", 300, 0, false, true, false, 0, nullptr)) + .WillOnce(testing::Return(T2ERROR_SUCCESS)); + + T2ERROR result = mock.registerProfile( + "profile1", 300, 0, false, true, false, 0, nullptr); + + EXPECT_EQ(T2ERROR_SUCCESS, result); +} +``` + +## Test Quality Standards + +### Coverage Requirements +- All public functions must have tests +- Test both success and failure paths +- Test boundary conditions +- Test error handling + +### Test Naming +```cpp +// Pattern: TEST(ComponentName, BehaviorBeingTested) + +TEST(Vector, CreateReturnsNonNull) { ... } +TEST(Vector, DestroyHandlesNull) { ... } +TEST(Vector, PushIncrementsSize) { ... } +TEST(Utils, ParseConfigInvalidJson) { ... 
} +``` + +### Assertions +- Use `ASSERT_*` when test can't continue after failure +- Use `EXPECT_*` when subsequent checks are still valuable +- Provide helpful failure messages + +```cpp +// GOOD: Informative assertions +ASSERT_NE(nullptr, ptr) << "Failed to allocate " << size << " bytes"; +EXPECT_EQ(expected, actual) << "Mismatch at index " << i; +EXPECT_TRUE(condition) << "Context: " << debug_info; +``` + +## Running Tests + +### Build Tests +```bash +./configure --enable-gtest +make check +``` + +### Memory Checking +```bash +valgrind --leak-check=full --show-leak-kinds=all \ + ./source/test/telemetry_gtest_report +``` + +### Test Output +- Use `GTEST_OUTPUT=xml:results.xml` for CI integration +- Check return code: 0 = all passed diff --git a/.github/instructions/shell-scripts.instructions.md b/.github/instructions/shell-scripts.instructions.md new file mode 100644 index 00000000..a25a2c69 --- /dev/null +++ b/.github/instructions/shell-scripts.instructions.md @@ -0,0 +1,179 @@ +--- +applyTo: "**/*.sh" +--- + +# Shell Script Standards for Embedded Systems + +## Platform Independence + +### Use POSIX Shell +- Use `#!/bin/sh` not `#!/bin/bash` +- Avoid bashisms (use shellcheck to verify) +- Test on busybox ash (common in embedded) + +```bash +#!/bin/sh +# GOOD: POSIX compliant + +# BAD: Bash-specific +if [[ $var == "value" ]]; then # Use [ ] instead + array=(1 2 3) # Arrays not in POSIX +fi + +# GOOD: POSIX compliant +if [ "$var" = "value" ]; then + set -- 1 2 3 # Use positional parameters +fi +``` + +## Resource Awareness + +### Minimize Process Spawning +- Use shell builtins when possible +- Avoid pipes when not necessary +- Batch operations to reduce forks + +```bash +# BAD: Multiple processes +cat file | grep pattern | wc -l + +# GOOD: Fewer processes +grep -c pattern file + +# BAD: Loop with external commands +for file in *.txt; do + cat "$file" >> output +done + +# GOOD: Single cat invocation +cat *.txt > output +``` + +### Memory Usage +- Avoid reading 
entire files into variables
- Process streams line by line
- Clean up temporary files

```bash
# BAD: Loads entire file into memory
content=$(cat large_file.log)
echo "$content" | grep ERROR

# GOOD: Stream processing
grep ERROR large_file.log

# GOOD: Line-by-line processing
while IFS= read -r line; do
    process_line "$line"
done < large_file.log
```

## Error Handling

### Always Check Exit Codes
```bash
# GOOD: Check critical operations
if ! mkdir -p /tmp/telemetry; then
    logger -t telemetry "ERROR: Failed to create directory"
    exit 1
fi

# GOOD: Use set -e for fail-fast
set -e          # Exit on any error
set -u          # Exit on undefined variable
set -o pipefail # Catch errors in pipes (not POSIX; use only where the shell supports it)

# GOOD: Trap for cleanup
cleanup() {
    rm -f "$TEMP_FILE"
}
trap cleanup EXIT INT TERM

TEMP_FILE=$(mktemp)
# ... use temp file ...
# cleanup happens automatically
```

## Script Quality

### Defensive Programming
```bash
# GOOD: Quote all variables
rm -f "$file_path"  # Not: rm -f $file_path

# GOOD: Use -- to separate options from arguments
grep -r -- "$pattern" "$directory"

# GOOD: Check variable is set
: "${CONFIG_FILE:?CONFIG_FILE must be set}"

# GOOD: Validate inputs
if [ -z "$1" ]; then
    echo "Usage: $0 <config-file>" >&2
    exit 1
fi
```

### Logging
```bash
# Use logger for syslog integration
log_info() {
    logger -t telemetry -p user.info "$*"
}

log_error() {
    logger -t telemetry -p user.error "$*"
    echo "ERROR: $*" >&2
}

# Usage
log_info "Starting telemetry collection"
if ! start_service; then
    log_error "Failed to start service"
    exit 1
fi
```

## Testing Scripts

### Use shellcheck
```bash
# Run shellcheck on all scripts
shellcheck script.sh

# In CI
find . 
-name "*.sh" -exec shellcheck {} +
```

### Test on Target Platform
- Test on actual embedded device or emulator
- Verify with busybox tools
- Check resource usage (memory, CPU)

## Anti-Patterns

```bash
# BAD: Looping over an unquoted string variable
for file in $FILES; do  # Word splitting and globbing!

# GOOD: Loop over a real list — quoting "$FILES" would run the loop
# once on the whole string, so use "$@" (or a glob) instead
set -- *.sh
for file in "$@"; do

# BAD: Parsing ls output
for file in $(ls *.txt); do

# GOOD: Use glob
for file in *.txt; do

# BAD: Useless use of cat
cat file | grep pattern

# GOOD: grep can read files
grep pattern file

# BAD: Not checking if file exists
rm /tmp/file  # Error if doesn't exist

# GOOD: Check or use -f
rm -f /tmp/file  # Or: [ -f /tmp/file ] && rm /tmp/file
```

diff --git a/.github/skills/memory-safety-analyzer/SKILL.md b/.github/skills/memory-safety-analyzer/SKILL.md
new file mode 100644
index 00000000..5d2d9b29
--- /dev/null
+++ b/.github/skills/memory-safety-analyzer/SKILL.md
@@ -0,0 +1,227 @@
---
name: memory-safety-analyzer
description: Analyze C/C++ code for memory safety issues including leaks, use-after-free, buffer overflows, and provide fixes. Use when reviewing memory management, debugging crashes, or improving code safety.
---

# Memory Safety Analysis for Embedded C

## Purpose

Systematically analyze C/C++ code for memory safety issues that can cause crashes, security vulnerabilities, or resource exhaustion in embedded systems.

## Usage

Invoke this skill when:
- Reviewing new code with dynamic memory allocation
- Debugging memory-related crashes
- Analyzing legacy code for safety issues
- Preparing code for production deployment
- Investigating memory leaks or fragmentation

## Analysis Process

### Step 1: Identify All Allocations

Search the code for:
- `malloc`, `calloc`, `realloc`
- `strdup`, `strndup`
- `fopen`, `open`
- `pthread_create`, `pthread_mutex_init`
- Custom allocation functions

For each allocation, verify:
1. Return value is checked
2. Corresponding free/close exists
3. 
Error paths also free resources +4. No double-free possible + +### Step 2: Check Pointer Lifetimes + +For each pointer variable: +- When is it assigned? +- When is it freed? +- Can it be used after free? +- Can it outlive the data it points to? +- Is it NULL-initialized? +- Is it NULL-checked before use? + +### Step 3: Analyze Error Paths + +For each error return: +- Are all resources freed? +- Is cleanup done in correct order? +- Are error codes accurate? +- Is logging appropriate? + +### Step 4: Review Buffer Operations + +For string and memory operations: +- `strcpy` → should be `strncpy` with size check +- `sprintf` → should be `snprintf` with size +- `gets` → never use (remove immediately) +- `strcat` → verify buffer size +- `memcpy` → verify no overlap, validate size + +### Step 5: Static Analysis + +Run tools: +```bash +# Cppcheck +cppcheck --enable=all --inconclusive file.c + +# Clang static analyzer +scan-build make + +# Compiler warnings +gcc -Wall -Wextra -Werror file.c +``` + +### Step 6: Dynamic Analysis + +Run valgrind: +```bash +valgrind --leak-check=full \ + --show-leak-kinds=all \ + --track-origins=yes \ + --verbose \ + ./program +``` + +## Common Issues and Fixes + +### Issue: Unchecked malloc + +```c +// PROBLEM +char* buffer = malloc(size); +strcpy(buffer, input); // Crash if malloc failed + +// FIX +char* buffer = malloc(size); +if (!buffer) { + log_error("Failed to allocate %zu bytes", size); + return ERR_NO_MEMORY; +} +strncpy(buffer, input, size - 1); +buffer[size - 1] = '\0'; +``` + +### Issue: Memory leak on error + +```c +// PROBLEM +int process() { + char* buf = malloc(1024); + FILE* f = fopen("file.txt", "r"); + + if (!f) return -1; // Leaked buf + + // ... process ... 
+ + free(buf); + fclose(f); + return 0; +} + +// FIX: Single exit with cleanup +int process() { + int ret = 0; + char* buf = NULL; + FILE* f = NULL; + + buf = malloc(1024); + if (!buf) { + ret = ERR_NO_MEMORY; + goto cleanup; + } + + f = fopen("file.txt", "r"); + if (!f) { + ret = ERR_FILE_OPEN; + goto cleanup; + } + + // ... process ... + +cleanup: + free(buf); + if (f) fclose(f); + return ret; +} +``` + +### Issue: Use after free + +```c +// PROBLEM +free(ptr); +if (ptr->field > 0) { ... } // Use after free! + +// FIX +int value = ptr->field; +free(ptr); +ptr = NULL; +if (value > 0) { ... } +``` + +### Issue: Double free + +```c +// PROBLEM +free(ptr); +// ... later ... +free(ptr); // Double free! + +// FIX: NULL after free +free(ptr); +ptr = NULL; +// ... later ... +free(ptr); // Safe: free(NULL) is a no-op +``` + +### Issue: Buffer overflow + +```c +// PROBLEM +char buffer[100]; +strcpy(buffer, user_input); // Overflow if input > 99 chars + +// FIX +char buffer[100]; +strncpy(buffer, user_input, sizeof(buffer) - 1); +buffer[sizeof(buffer) - 1] = '\0'; +``` + +## Output Format + +Provide findings as: + +``` +## Memory Safety Analysis + +### Critical Issues (must fix) +1. [file.c:123] Unchecked malloc - potential NULL dereference +2. [file.c:456] Memory leak on error path - buffer not freed +3. [file.c:789] Use after free - ptr used after free() + +### Warnings (should fix) +1. [file.c:234] strcpy used - prefer strncpy +2. [file.c:567] Missing NULL check before pointer use + +### Recommendations +1. Add cleanup label for resource management +2. Use RAII wrapper in tests +3. Run valgrind in CI pipeline + +### Suggested Fixes +[Provide specific code changes for each issue] +``` + +## Verification + +After fixes: +1. All static analysis warnings resolved +2. Valgrind shows no leaks +3. All tests pass +4. Code review by human +5. 
Memory footprint measured and acceptable

diff --git a/.github/skills/platform-portability-checker/SKILL.md b/.github/skills/platform-portability-checker/SKILL.md
new file mode 100644
index 00000000..354fce3c
--- /dev/null
+++ b/.github/skills/platform-portability-checker/SKILL.md
@@ -0,0 +1,318 @@
---
name: platform-portability-checker
description: Verify C/C++ code is platform-independent and portable across embedded platforms. Use when reviewing code for cross-platform deployment or preparing for new hardware targets.
---

# Platform Portability Checker

## Purpose

Ensure C/C++ code is portable across different embedded platforms, architectures, and operating systems without modification.

## When to Use

- Reviewing new code before merge
- Porting to new hardware platform
- Preparing release for multiple architectures
- Investigating platform-specific bugs
- Refactoring legacy platform-specific code

## Portability Checklist

### 1. Integer Types

**Check for**: Use of `int`, `long`, `short` without fixed sizes

```c
// PROBLEM: Size varies by platform
int counter;     // 16, 32, or 64 bits?
long timestamp;  // 32 or 64 bits?
short flag;      // At least 16 bits, but exact width not guaranteed

// FIX: Use stdint.h types
#include <stdint.h>

uint32_t counter;    // Always 32 bits
uint64_t timestamp;  // Always 64 bits
uint16_t flag;       // Always 16 bits

// For size_t operations
size_t length;   // Unsigned, large enough for any object size
ssize_t result;  // Signed counterpart (POSIX)
```

### 2. Pointer Assumptions

**Check for**: Pointer arithmetic, casting, size assumptions

```c
// PROBLEM: Assumes pointer == long
long ptr_value = (long)ptr;  // Fails on 64-bit with 32-bit long

// FIX: Use uintptr_t
#include <stdint.h>
uintptr_t ptr_value = (uintptr_t)ptr;

// PROBLEM: Pointer used as integer
if (ptr & 0x1) { ... }  // What size is ptr?

// FIX: Be explicit
if ((uintptr_t)ptr & 0x1) { ... }
```

### 3. 
Endianness

**Check for**: Multi-byte values sent over network or stored to disk

```c
// PROBLEM: Host byte order assumed
uint32_t value = 0x12345678;
fwrite(&value, 4, 1, file);  // Different on LE vs BE

// FIX: Explicit byte order
#include <arpa/inet.h>  // For htonl, ntohl

uint32_t host_value = 0x12345678;
uint32_t network_value = htonl(host_value);
fwrite(&network_value, 4, 1, file);

// For reading
uint32_t network_value;
fread(&network_value, 4, 1, file);
uint32_t host_value = ntohl(network_value);
```

### 4. Structure Packing

**Check for**: Structures sent over network or saved to disk

```c
// PROBLEM: Padding varies by platform
struct {
    uint8_t type;
    uint32_t value;  // Padding before this?
    uint16_t flags;  // Padding before this?
} data;

// FIX: Explicit packing (GCC/Clang attribute; use #pragma pack on other compilers)
struct __attribute__((packed)) {
    uint8_t type;
    uint32_t value;
    uint16_t flags;
} data;

// Or control padding explicitly
struct {
    uint8_t type;
    uint8_t padding[3];  // Explicit padding
    uint32_t value;
    uint16_t flags;
    uint16_t padding2;
} data;
```

### 5. Boolean Type

**Check for**: Using int/char for boolean

```c
// PROBLEM: Non-standard boolean
int flag;      // Really 3 states: 0, 1, other
char enabled;  // Also used for booleans

// FIX: Use stdbool.h
#include <stdbool.h>

bool flag;
bool enabled;

if (flag) { ... }  // Clear intent
```

### 6. Character Sets

**Check for**: Assumptions about ASCII or character encoding

```c
// PROBLEM: Assumes ASCII
if (ch >= 'A' && ch <= 'Z') {
    ch = ch + 32;  // Convert to lowercase?
}

// FIX: Use standard functions
#include <ctype.h>

if (isupper(ch)) {
    ch = tolower(ch);
}
```

### 7. 
File Paths

**Check for**: Hard-coded path separators

```c
// PROBLEM: Unix-specific
const char* path = "/tmp/telemetry/data.log";

// FIX: Use platform-agnostic approach
#ifdef _WIN32
    #define PATH_SEP "\\"
    const char* tmp_dir = getenv("TEMP");
#else
    #define PATH_SEP "/"
    const char* tmp_dir = "/tmp";
#endif

char path[256];
snprintf(path, sizeof(path), "%s%stelemetry%sdata.log",
         tmp_dir, PATH_SEP, PATH_SEP);
```

### 8. System Calls

**Check for**: Platform-specific syscalls

```c
// PROBLEM: Linux-specific
#include <sys/epoll.h>
int fd = epoll_create(10);

// FIX: Abstraction layer
// In platform.h
#if defined(__linux__)
    #include "platform_linux.h"
#elif defined(__APPLE__)
    #include "platform_darwin.h"
#else
    #error "Unsupported platform"
#endif

// Each platform provides same interface
event_loop_t* create_event_loop(void);
```

### 9. Compiler Extensions

**Check for**: GCC/Clang specific features

```c
// PROBLEM: GCC/Clang-specific
typeof(x) y = x;  // typeof only became standard in C23
int array[0];     // Zero-length array

// FIX: Avoid the extension — spell out the type
int y = x;

// And use a C99 flexible array member instead of a zero-length array
struct msg {
    size_t len;
    char data[];
};
```

### 10. 
Include Paths

**Check for**: Platform-specific headers

```c
// PROBLEM: Assumes Linux headers
#include <linux/limits.h>

// FIX: Use standard headers or configure check
#ifdef HAVE_LINUX_LIMITS_H
    #include <linux/limits.h>
#else
    #include <limits.h>
#endif

// Or use autoconf to detect
// In configure.ac:
// AC_CHECK_HEADERS([linux/limits.h limits.h])
```

## Build System Integration

### configure.ac checks

```autoconf
# Check for required features
AC_C_BIGENDIAN
AC_CHECK_SIZEOF([int])
AC_CHECK_SIZEOF([long])
AC_CHECK_SIZEOF([void *])

# Check for headers
AC_CHECK_HEADERS([stdint.h stdbool.h endian.h])

# Check for functions
AC_CHECK_FUNCS([htonl ntohl])

# Platform-specific code
case "$host" in
    *-linux*)
        AC_DEFINE([PLATFORM_LINUX], [1])
        ;;
    arm*|*-arm*)
        AC_DEFINE([PLATFORM_ARM], [1])
        ;;
esac
```

## Testing

### Cross-Compilation Test

```bash
# Test building for different architectures
./configure --host=arm-linux-gnueabihf
make clean && make

./configure --host=x86_64-linux-gnu
make clean && make

./configure --host=mips-linux-gnu
make clean && make
```

### Endianness Test

```c
// Test endianness handling
uint32_t value = 0x12345678;
uint32_t network = htonl(value);
uint32_t restored = ntohl(network);
assert(value == restored);

// Verify structure packing
assert(sizeof(packed_struct_t) == EXPECTED_SIZE);
```

## Output Format

```
## Platform Portability Analysis

### Critical Issues
1. [file.c:123] Using `long` for timestamp - not fixed width
2. [file.c:456] Writing struct directly to network - endianness issue
3. [file.c:789] Assuming 32-bit pointers

### Warnings
1. [file.c:234] Using int for boolean - prefer stdbool.h
2. [file.c:567] Hard-coded Unix path separator

### Recommendations
1. Add configure checks for required headers
2. Create platform abstraction layer
3. 
Test build on multiple architectures + +### Suggested Fixes +[Specific code changes for each issue] +``` + +## Verification + +- Code compiles on target platforms +- Tests pass on all platforms +- Static analysis clean +- No endianness issues +- No alignment issues +- Structure sizes verified diff --git a/.github/skills/quality-checker/README.md b/.github/skills/quality-checker/README.md new file mode 100644 index 00000000..ce3c6b7a --- /dev/null +++ b/.github/skills/quality-checker/README.md @@ -0,0 +1,72 @@ +# Quality Checker Skill + +Run comprehensive quality checks in the standard test container through chat interface. + +## Quick Start + +Simply ask Copilot to run quality checks in natural language: + +```text +Run quality checks +``` + +```text +Check memory safety +``` + +```text +Run static analysis on source/bulkdata +``` + +## What Gets Checked + +1. **Static Analysis**: cppcheck + shellcheck +2. **Memory Safety**: valgrind leak detection +3. **Thread Safety**: helgrind race/deadlock detection +4. **Build Verification**: strict warnings compilation + +## Environment + +Runs in the same container as CI/CD: + +- Image: `ghcr.io/rdkcentral/docker-device-mgt-service-test/native-platform:latest` +- All tools pre-installed +- Consistent with automated tests + +## Example Invocations + +| What to say | What it does | +| ----------- | ------------ | +| "Run quality checks" | Full suite, summary report | +| "Quick static analysis" | cppcheck + shellcheck only | +| "Check for memory leaks" | valgrind on test binaries | +| "Verify build with strict warnings" | Build with -Werror | +| "Run all checks on source/utils" | Full suite, scoped to utils | + +## Typical Workflow + +1. **Before committing**: "Run static analysis" +2. **Before push**: "Run quality checks" +3. **Debugging crash**: "Check memory safety" +4. 
**Reviewing PR**: "Run all checks" + +## Output + +You'll receive: + +- Summary of issues found +- Critical problems highlighted +- Links to detailed reports +- Recommendations for fixes + +## Prerequisites + +- Docker installed and running +- Access to GitHub Container Registry (automatic in CI/CD, may need login locally) + +## Tips + +- Start with static analysis (fastest) +- Run memory checks after static analysis passes +- Scope checks to changed files for speed +- Full suite before pushing to develop branch diff --git a/.github/skills/quality-checker/SKILL.md b/.github/skills/quality-checker/SKILL.md new file mode 100644 index 00000000..7abc3b8c --- /dev/null +++ b/.github/skills/quality-checker/SKILL.md @@ -0,0 +1,325 @@ +--- +name: quality-checker +description: Run comprehensive quality checks (static analysis, memory safety, thread safety, build verification) in the standard test container. Use when validating code changes or debugging before committing. +--- + +# Container-Based Quality Checker + +## Purpose + +Execute comprehensive quality checks on the codebase using the same containerized environment as CI/CD pipelines. Ensures consistency between local development and automated testing. + +## Usage + +Invoke this skill when: +- Validating changes before committing +- Debugging build or test failures +- Running quality checks locally +- Verifying memory safety of new code +- Checking for thread safety issues +- Performing static analysis + +You can run all checks or select specific ones based on your needs. + +## What It Does + +This skill runs quality checks inside the official test container (`ghcr.io/rdkcentral/docker-device-mgt-service-test/native-platform:latest`), which includes: +- Build tools (gcc, autotools, make) +- Static analysis tools (cppcheck, shellcheck) +- Memory analysis tools (valgrind) +- Thread analysis tools (helgrind) +- Google Test/Mock frameworks + +## Available Checks + +### 1. 
Static Analysis +- **cppcheck**: Comprehensive C/C++ static code analyzer +- **shellcheck**: Shell script linter +- **Output**: XML report with findings + +### 2. Memory Safety (Valgrind) +- **Memory leak detection**: Finds unreleased allocations +- **Use-after-free detection**: Catches dangling pointer usage +- **Invalid memory access**: Buffer overflows, uninitialized reads +- **Output**: XML and log files per test binary + +### 3. Thread Safety (Helgrind) +- **Race condition detection**: Finds unsynchronized shared memory access +- **Deadlock detection**: Identifies lock ordering issues +- **Lock usage verification**: Validates proper synchronization +- **Output**: XML and log files per test binary + +### 4. Build Verification +- **Strict compilation**: Builds with `-Wall -Wextra -Werror` +- **Test build**: Verifies tests compile +- **Binary analysis**: Reports size and dependencies +- **Output**: Build artifacts and size report + +## Execution Process + +### Step 1: Setup Container Environment + +Pull the latest test container: +```bash +docker pull ghcr.io/rdkcentral/docker-device-mgt-service-test/native-platform:latest +``` + +Start container with workspace mounted: +```bash +docker run -d --name native-platform \ + -v /path/to/workspace:/mnt/workspace \ + ghcr.io/rdkcentral/docker-device-mgt-service-test/native-platform:latest +``` + +### Step 2: Run Selected Checks + +Execute the requested quality checks inside the container: + +**Static Analysis:** +```bash +docker exec -i native-platform /bin/bash -c " + cd /mnt/workspace && \ + cppcheck --enable=all \ + --inconclusive \ + --suppress=missingIncludeSystem \ + --suppress=unmatchedSuppression \ + --error-exitcode=0 \ + --xml \ + --xml-version=2 \ + source/ 2> cppcheck-report.xml +" +``` + +**Shell Script Checks:** +```bash +docker exec -i native-platform /bin/bash -c " + cd /mnt/workspace && \ + find . 
-name '*.sh' -type f -exec shellcheck {} + +" +``` + +**Memory Safety:** +```bash +docker exec -i native-platform /bin/bash -c " + cd /mnt/workspace && \ + autoreconf -fi && \ + ./configure --enable-gtest && \ + make -j\$(nproc) && \ + find source/test -type f -executable -name '*test*' | while read test_bin; do + valgrind --leak-check=full \ + --show-leak-kinds=all \ + --track-origins=yes \ + --xml=yes \ + --xml-file=\"valgrind-\$(basename \$test_bin).xml\" \ + \"\$test_bin\" 2>&1 | tee \"valgrind-\$(basename \$test_bin).log\" + done +" +``` + +**Thread Safety:** +```bash +docker exec -i native-platform /bin/bash -c " + cd /mnt/workspace && \ + find source/test -type f -executable -name '*test*' | while read test_bin; do + valgrind --tool=helgrind \ + --track-lockorders=yes \ + --xml=yes \ + --xml-file=\"helgrind-\$(basename \$test_bin).xml\" \ + \"\$test_bin\" 2>&1 | tee \"helgrind-\$(basename \$test_bin).log\" + done +" +``` + +**Build Verification:** +```bash +docker exec -i native-platform /bin/bash -c " + cd /mnt/workspace && \ + autoreconf -fi && \ + ./configure --enable-gtest CFLAGS='-Wall -Wextra -Werror' && \ + make -j\$(nproc) && \ + if [ -f 'telemetry2_0' ]; then + ls -lh telemetry2_0 + file telemetry2_0 + size telemetry2_0 + fi +" +``` + +### Step 3: Report Results + +Parse and summarize results for the user: +- Number of issues found by category +- Critical issues requiring immediate attention +- Warnings that should be addressed +- Memory leaks with stack traces +- Race conditions or deadlock risks +- Build errors or warnings + +### Step 4: Cleanup + +Stop and remove the container: +```bash +docker stop native-platform +docker rm native-platform +``` + +## Interpreting Results + +### Static Analysis (cppcheck) +- **error**: Critical issues that must be fixed +- **warning**: Potential problems to review +- **style**: Code style improvements +- **performance**: Optimization opportunities + +### Memory Safety (Valgrind) +- **definitely lost**: Memory 
leaks requiring fixes +- **indirectly lost**: Leaks from lost parent structures +- **possibly lost**: Potential leaks to investigate +- **still reachable**: Memory held at exit (usually OK) +- **Invalid read/write**: Buffer overflow (CRITICAL) +- **Use of uninitialized value**: Must initialize before use + +### Thread Safety (Helgrind) +- **Possible data race**: Unsynchronized access to shared data +- **Lock order violation**: Potential deadlock scenario +- **Unlocking unlocked lock**: Synchronization bug +- **Thread still holds locks**: Resource leak + +### Build Verification +- **Compilation errors**: Must fix before proceeding +- **Warnings**: Review and fix (builds with -Werror) +- **Binary size**: Monitor for embedded constraints + +## User Interaction + +When invoked, ask the user: + +1. **Which checks to run?** + - All checks (comprehensive) + - Static analysis only (fast) + - Memory safety only + - Thread safety only + - Build verification only + - Custom combination + +2. **Scope:** + - Full codebase + - Specific directories + - Recently changed files + +3. **Report detail:** + - Summary only (counts and critical issues) + - Detailed (all findings) + - Full raw output + +## Example Invocations + +**User**: "Run quality checks" +- Default: Run all checks on full codebase, provide summary + +**User**: "Check memory safety" +- Run only valgrind checks, detailed report + +**User**: "Quick static analysis" +- Run cppcheck and shellcheck, summary only + +**User**: "Verify my changes build" +- Run build verification with strict warnings + +**User**: "Full analysis on source/bulkdata" +- Run all checks scoped to bulkdata directory + +## Best Practices + +1. **Run before committing**: Catch issues early +2. **Start with static analysis**: Fastest feedback +3. **Run memory checks on test binaries**: Most effective +4. **Review thread safety for concurrent code**: Essential for multi-threaded components +5. 
**Monitor binary size**: Important for embedded targets

## Integration with Development Workflow

1. **Pre-commit**: Quick static analysis
2. **Pre-push**: Full quality check suite
3. **Debugging**: Targeted memory/thread analysis
4. **Code review**: Validate reviewer feedback
5. **Refactoring**: Ensure no regressions

## Advantages Over Manual Testing

- **Consistency**: Same environment as CI/CD
- **Completeness**: All tools in one command
- **Reproducibility**: Container ensures identical results
- **Efficiency**: No local tool installation needed
- **Confidence**: Pass locally = pass in CI

## Output Files Generated

- `cppcheck-report.xml`: Static analysis findings
- `valgrind-<test>.xml`: Memory issues per test
- `valgrind-<test>.log`: Detailed memory logs
- `helgrind-<test>.xml`: Thread safety issues per test
- `helgrind-<test>.log`: Detailed concurrency logs

These files can be uploaded as artifacts or reviewed locally.

## Limitations

- Requires Docker with GitHub Container Registry access
- Container pulls can be slow on first run (cached afterward)
- Full suite can take several minutes depending on codebase size
- Valgrind slows execution significantly (expected)

## Tips for Faster Execution

1. Use cached container images (don't pull every time)
2. Run static analysis first (fastest)
3. Scope checks to changed directories
4. Run memory/thread checks only on affected tests
5. Use parallel execution where possible

## Skill Execution Logic

When user invokes this skill:

1. **Authenticate with GitHub Container Registry**
   - Use github.actor and GITHUB_TOKEN if available
   - Otherwise prompt for credentials or skip private registries

2. **Pull container image**
   - Check if image exists locally
   - Pull only if needed or if --force specified

3. **Start container**
   - Mount workspace at /mnt/workspace
   - Use a unique container name (e.g. quality-checker-<timestamp>)
   - Run in detached mode

4. 
**Execute requested checks** + - Run checks in sequence + - Capture output + - Continue on errors (collect all findings) + +5. **Collect results** + - Copy result files from container + - Parse XML/log outputs + - Categorize findings + +6. **Report to user** + - Summary count + - Critical issues highlighted + - Link to detailed reports + - Next steps recommendations + +7. **Cleanup** + - Stop container + - Remove container + - Optional: clean up result files + +## Error Handling + +- **Container pull fails**: Report error, suggest manual pull +- **Container start fails**: Check Docker daemon, ports, permissions +- **Build fails**: Report build errors, stop further checks +- **Tools missing**: Verify container version, report missing tools +- **Out of memory**: Suggest increasing Docker memory limit diff --git a/.github/skills/technical-documentation-writer/SKILL.md b/.github/skills/technical-documentation-writer/SKILL.md new file mode 100644 index 00000000..b66ff3af --- /dev/null +++ b/.github/skills/technical-documentation-writer/SKILL.md @@ -0,0 +1,714 @@ +--- +name: technical-documentation-writer +description: Create and maintain comprehensive technical documentation for embedded systems projects. Use for architecture docs, API references, developer guides, and system documentation following best practices. +--- + +# Technical Documentation Writer for Embedded Systems + +## Purpose + +Create clear, comprehensive, and maintainable technical documentation for embedded C/C++ projects, with focus on architecture, APIs, threading models, memory management, and platform integration. 
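As a concrete reference point for the depth this skill targets, the sketch below shows a fully documented C function: the comment block states the contract, memory ownership, and thread-safety guarantees that the architecture and API documents should also record at the component level. `bounded_dup` is a hypothetical helper invented for illustration, not part of the Telemetry API.

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/**
 * Duplicate at most `max` characters of `src` into a new buffer.
 *
 * Ownership:     the caller owns the returned buffer and must free() it.
 * Thread-safety: reentrant; touches no shared state.
 *
 * @param src NUL-terminated input string; may be NULL.
 * @param max Maximum number of characters copied (terminating NUL excluded).
 * @return Newly allocated NUL-terminated copy, or NULL if `src` is NULL
 *         or allocation fails.
 */
char* bounded_dup(const char* src, size_t max) {
    if (!src) return NULL;

    size_t len = strnlen(src, max);   /* never reads past max characters */
    char* out = malloc(len + 1);
    if (!out) return NULL;            /* caller handles allocation failure */

    memcpy(out, src, len);
    out[len] = '\0';
    return out;
}
```

Documenting those three facts per function (contract, ownership, thread-safety) is what makes the later API reference and threading-model documents cheap to write and keep accurate.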
+ +## Usage + +Invoke this skill when: +- Documenting new features or components +- Creating system architecture documentation +- Writing API reference documentation +- Documenting threading and synchronization models +- Creating developer onboarding guides +- Documenting debugging procedures +- Writing integration guides for platform vendors + +## Documentation Structure + +### Directory Layout + +``` +project/ +├── README.md # Project overview, quick start +├── docs/ # General documentation +│ ├── README.md # Documentation index +│ ├── architecture/ # System architecture +│ │ ├── overview.md # High-level architecture +│ │ ├── component-diagram.md # Component relationships +│ │ ├── threading-model.md # Threading architecture +│ │ └── data-flow.md # Data flow diagrams +│ ├── api/ # API documentation +│ │ ├── public-api.md # Public API reference +│ │ └── internal-api.md # Internal API reference +│ ├── integration/ # Integration guides +│ │ ├── build-setup.md # Build environment setup +│ │ ├── platform-porting.md # Porting to new platforms +│ │ └── testing.md # Test procedures +│ └── troubleshooting/ # Debug guides +│ ├── memory-issues.md # Memory debugging +│ ├── threading-issues.md # Thread debugging +│ └── common-errors.md # Common error solutions +└── source/ # Source code + └── docs/ # Component-specific docs + ├── bulkdata/ # Mirrors source structure + │ ├── README.md # Component overview + │ └── profile-management.md + ├── protocol/ + │ ├── README.md + │ └── http-architecture.md + └── scheduler/ + ├── README.md + └── scheduling-algorithm.md +``` + +### Document Types + +#### 1. **Architecture Documentation** (`docs/architecture/`) +- System overview and design principles +- Component relationships and dependencies +- Threading and concurrency models +- Data flow and state machines +- Memory management strategies +- Platform abstraction layers + +#### 2. 
**API Documentation** (`docs/api/`) +- Public API reference with examples +- Internal API documentation +- Function contracts and preconditions +- Thread-safety guarantees +- Memory ownership semantics +- Error handling conventions + +#### 3. **Component Documentation** (`source/docs/`) +- Per-component technical details +- Algorithm explanations +- Implementation notes +- Performance characteristics +- Resource usage (memory, CPU, threads) +- Dependencies and interfaces + +#### 4. **Integration Guides** (`docs/integration/`) +- Build system setup +- Platform porting guides +- Configuration options +- Testing procedures +- Deployment checklists + +#### 5. **Troubleshooting Guides** (`docs/troubleshooting/`) +- Common error scenarios +- Debug techniques +- Log analysis +- Memory profiling +- Thread race detection + +## Documentation Process + +### Step 1: Analyze the Code + +Before writing documentation: + +1. **Read the source code** - Understand implementation +2. **Identify key abstractions** - Classes, structs, modules +3. **Map dependencies** - What calls what, data flow +4. **Find synchronization** - Mutexes, conditions, atomics +5. **Trace resource lifecycle** - Allocations, ownership, cleanup +6. **Review existing docs** - Check for patterns and style + +### Step 2: Create Structure + +For each component: + +```markdown +# Component Name + +## Overview +Brief 2-3 sentence description of purpose and role. + +## Architecture +High-level design with diagrams. + +## Key Components +List main structures, functions, modules. + +## Threading Model +How threads interact, synchronization primitives. + +## Memory Management +Allocation patterns, ownership, lifecycle. + +## API Reference +Public functions with signatures and examples. + +## Usage Examples +Common use cases with code snippets. + +## Error Handling +Error codes, failure modes, recovery. + +## Performance Considerations +Resource usage, bottlenecks, optimization tips. 
+ +## Platform Notes +Platform-specific behavior or requirements. + +## Testing +How to test, test coverage, known issues. + +## See Also +Cross-references to related documentation. +``` + +### Step 3: Add Diagrams + +Use Mermaid for visual documentation: + +#### Component Diagram +```mermaid +graph TB + A[Client] --> B[Connection Pool] + B --> C[CURL Handle 1] + B --> D[CURL Handle 2] + B --> E[CURL Handle N] + C --> F[libcurl] + D --> F + E --> F + F --> G[HTTP Server] +``` + +#### Sequence Diagram +```mermaid +sequenceDiagram + participant Client + participant Pool + participant CURL + participant Server + + Client->>Pool: Request handle + Pool->>Pool: Lock mutex + Pool-->>Client: Return handle + Client->>CURL: Configure request + Client->>CURL: Execute + CURL->>Server: HTTP Request + Server-->>CURL: Response + CURL-->>Client: Result + Client->>Pool: Release handle + Pool->>Pool: Signal condition +``` + +#### State Diagram +```mermaid +stateDiagram-v2 + [*] --> Uninitialized + Uninitialized --> Initialized: init() + Initialized --> Running: start() + Running --> Paused: pause() + Paused --> Running: resume() + Running --> Stopped: stop() + Stopped --> [*] +``` + +#### Data Flow Diagram +```mermaid +flowchart LR + A[Marker Event] --> B{Event Type} + B -->|Component| C[Component Marker] + B -->|Event| D[Event Marker] + C --> E[Profile Matcher] + D --> E + E --> F[Report Generator] + F --> G[HTTP Sender] +``` + +### Step 4: Add Code Examples + +Provide clear, compilable examples: + +#### Good Example Structure +```markdown +### Example: Creating a Profile + +This example shows how to create and configure a telemetry profile. 
+
+**Prerequisites:**
+- Telemetry system initialized
+- Valid configuration file
+
+**Code:**
+```c
+#include "profile.h"
+#include <stdio.h>
+
+int main(void) {
+    profile_t* profile = NULL;
+    int ret = 0;
+
+    // Create profile with name and interval
+    ret = profile_create("MyProfile", 60, &profile);
+    if (ret != 0) {
+        fprintf(stderr, "Failed to create profile: %d\n", ret);
+        return -1;
+    }
+
+    // Add marker to profile
+    ret = profile_add_marker(profile, "Component.Status",
+                             MARKER_TYPE_COMPONENT);
+    if (ret != 0) {
+        fprintf(stderr, "Failed to add marker: %d\n", ret);
+        profile_destroy(profile);
+        return -1;
+    }
+
+    // Activate profile
+    ret = profile_activate(profile);
+    if (ret != 0) {
+        fprintf(stderr, "Failed to activate profile: %d\n", ret);
+        profile_destroy(profile);
+        return -1;
+    }
+
+    printf("Profile created and activated successfully\n");
+
+    // Cleanup
+    profile_destroy(profile);
+    return 0;
+}
+```
+
+**Expected Output:**
+```
+Profile created and activated successfully
+```
+
+**Notes:**
+- Always check return values
+- Call profile_destroy() even on error paths
+- Profile name must be unique
+```
+
+### Step 5: Document APIs
+
+For each public function:
+
+```markdown
+### profile_create()
+
+Creates a new telemetry profile.
+
+**Signature:**
+```c
+int profile_create(const char* name,
+                   unsigned int interval_sec,
+                   profile_t** out_profile);
+```
+
+**Parameters:**
+- `name` - Unique profile name (max 63 chars, non-NULL)
+- `interval_sec` - Reporting interval in seconds (min: 60, max: 86400)
+- `out_profile` - Output pointer to created profile (must be non-NULL)
+
+**Returns:**
+- `0` - Success
+- `-EINVAL` - Invalid parameter (NULL name/out_profile, invalid interval)
+- `-ENOMEM` - Memory allocation failed
+- `-EEXIST` - Profile with same name already exists
+
+**Thread Safety:**
+Thread-safe. Uses internal mutex for profile list management.
+
+**Memory:**
+Allocates memory for profile structure and name copy. Caller must call
+`profile_destroy()` to free resources.
+
+**Example:**
+See [Example: Creating a Profile](#example-creating-a-profile)
+
+**See Also:**
+- profile_destroy()
+- profile_activate()
+- profile_add_marker()
+```
+
+### Step 6: Document Threading
+
+For multi-threaded components:
+
+```markdown
+## Threading Model
+
+### Thread Overview
+
+| Thread Name | Purpose | Priority | Stack Size |
+|------------|---------|----------|------------|
+| Main | Initialization, message loop | Normal | Default |
+| XConf Fetch | Configuration retrieval | Low | 64KB |
+| Report Send | HTTP report transmission | Low | 64KB |
+| Event Receiver | Marker event processing | High | 32KB |
+
+### Synchronization Primitives
+
+```c
+// Global mutexes
+static pthread_mutex_t pool_mutex = PTHREAD_MUTEX_INITIALIZER;
+static pthread_mutex_t profile_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+// Condition variables
+static pthread_cond_t pool_cond = PTHREAD_COND_INITIALIZER;
+static pthread_cond_t xconf_cond = PTHREAD_COND_INITIALIZER;
+```
+
+### Lock Ordering
+
+To prevent deadlocks, always acquire locks in this order:
+
+1. `profile_mutex` (profile list)
+2. `pool_mutex` (connection pool)
+3. Individual profile locks
+
+**Example:**
+```c
+// CORRECT: Proper lock ordering
+pthread_mutex_lock(&profile_mutex);
+profile_t* p = find_profile_locked(name);
+pthread_mutex_lock(&pool_mutex);
+// ... use both resources ...
+pthread_mutex_unlock(&pool_mutex);
+pthread_mutex_unlock(&profile_mutex);
+
+// WRONG: Deadlock risk!
+pthread_mutex_lock(&pool_mutex);
+pthread_mutex_lock(&profile_mutex); // May deadlock!
+``` + +### Thread Safety Guarantees + +| Function | Thread Safety | Notes | +|----------|---------------|-------| +| profile_create() | Thread-safe | Uses profile_mutex | +| profile_destroy() | Thread-safe | Uses profile_mutex | +| profile_add_marker() | Not thread-safe | Call before activation only | +| send_report() | Thread-safe | Uses pool_mutex | +``` + +### Step 7: Document Memory Management + +```markdown +## Memory Management + +### Allocation Patterns + +```mermaid +graph TD + A[profile_create] --> B[malloc profile_t] + B --> C[strdup name] + B --> D[malloc markers array] + E[profile_add_marker] --> F[realloc markers] + G[profile_destroy] --> H[free markers] + H --> I[free name] + I --> J[free profile_t] +``` + +### Ownership Rules + +1. **profile_t**: Owned by caller after profile_create() +2. **Marker strings**: Copied; caller retains original ownership +3. **Report data**: Owned by sender; freed after transmission + +### Lifecycle Example + +```c +// Creation phase +profile_t* prof = NULL; +profile_create("test", 60, &prof); // Allocates memory + +// Configuration phase +profile_add_marker(prof, "mark1", TYPE_EVENT); // May realloc +profile_add_marker(prof, "mark2", TYPE_EVENT); // May realloc + +// Active phase - no allocations +profile_activate(prof); + +// Destruction phase +profile_destroy(prof); // Frees all memory +prof = NULL; // Prevent use-after-free +``` + +### Memory Budget + +Typical memory usage per component: + +| Component | Static | Dynamic (per item) | Notes | +|-----------|--------|-------------------|-------| +| Profile | 128 bytes | +32 bytes/marker | Preallocated list | +| Connection Pool | 512 bytes | +256 bytes/handle | Max 5 handles | +| Report Buffer | 0 | 64KB | Temporary, freed after send | + +**Total typical footprint**: ~150KB (5 profiles, 3 connections, 1 report) +``` + +## Best Practices + +### Writing Style + +1. **Be Concise**: Get to the point quickly +2. **Be Specific**: Use exact terms, not vague descriptions +3. 
**Be Accurate**: Test all code examples +4. **Be Complete**: Don't leave critical details unstated +5. **Be Consistent**: Follow established patterns + +### Code Examples + +- **Always compile-test** examples before documenting +- **Show error handling** - embedded systems need robust code +- **Include cleanup** - demonstrate proper resource management +- **Add context** - explain when/why to use the code +- **Keep focused** - one example, one concept + +### Diagrams + +- **Use Mermaid** for all diagrams (version control friendly) +- **Keep simple** - max 10-12 nodes per diagram +- **Label clearly** - all arrows and nodes need names +- **Show flow** - make direction obvious +- **Add legends** - explain symbols if needed + +### Cross-References + +Link related documentation: + +```markdown +## See Also + +- [Threading Model](../architecture/threading-model.md) - Overall thread architecture +- [Connection Pool API](connection-pool.md) - Pool management functions +- [Error Codes](../api/error-codes.md) - Complete error code reference +- [Build Guide](../integration/build-setup.md) - Compilation instructions +``` + +### Platform-Specific Notes + +Always document platform variations: + +```markdown +## Platform Notes + +### Linux +- Uses pthread for threading +- Requires libcurl 7.65.0+ +- mTLS via OpenSSL 1.1.1+ + +### RDKB Devices +- Integration with RDK logger (rdk_debug.h) +- Uses RBUS for IPC when available +- Memory constraints: limit to 8 profiles max + +### Constraints +- **Memory**: Tested with 64MB minimum +- **CPU**: ARMv7 or better +- **Storage**: 1MB for logs and cache +``` + +## Output Format + +### Component Documentation Template + +```markdown +# [Component Name] + +## Overview + +[2-3 sentence description] + +## Architecture + +[High-level design explanation] + +### Component Diagram +```mermaid +[Component relationship diagram] +``` + +## Key Components + +### [Structure/Type Name] + +[Description] + +```c +typedef struct { + // Fields with comments 
+} structure_t; +``` + +## Threading Model + +[Thread safety and synchronization] + +## Memory Management + +[Allocation patterns and ownership] + +## API Reference + +### [function_name()] + +[Full API documentation] + +## Usage Examples + +### Example: [Use Case] + +[Complete working example] + +## Error Handling + +[Error codes and recovery] + +## Performance + +[Resource usage and bottlenecks] + +## Testing + +[Test procedures and coverage] + +## See Also + +[Cross-references] +``` + +## Quality Checklist + +Before considering documentation complete: + +- [ ] All public APIs documented with signatures +- [ ] At least one working code example per major function +- [ ] Thread safety explicitly stated +- [ ] Memory ownership clearly documented +- [ ] Error codes and meanings listed +- [ ] Diagrams for complex flows +- [ ] Cross-references to related docs +- [ ] Platform-specific notes included +- [ ] Code examples compile and run +- [ ] Grammar and spelling checked +- [ ] Reviewed by component author + +## Maintenance + +Documentation is code: + +1. **Update with code changes** - docs and code change together +2. **Version documentation** - tag with releases +3. **Review periodically** - ensure accuracy quarterly +4. **Fix broken links** - validate references +5. **Deprecate carefully** - mark old features clearly + +### Deprecation Notice Template + +```markdown +## DEPRECATED: old_function() + +⚠️ **This function is deprecated as of v2.1.0** + +**Reason**: Memory leak risk in error paths + +**Alternative**: Use new_function() instead + +**Migration Example**: +```c +// Old way (deprecated) +old_function(param); + +// New way +new_function(param); +``` + +**Removal**: Scheduled for v3.0.0 (Est. Q2 2026) +``` + +## Tools Integration + +### Generate API Docs from Code + +Use Doxygen-style comments in code: + +```c +/** + * @brief Create a new telemetry profile + * + * Creates and initializes a profile structure. 
The caller is responsible + * for destroying the profile with profile_destroy() when done. + * + * @param[in] name Unique profile name (max 63 chars) + * @param[in] interval_sec Reporting interval (60-86400 seconds) + * @param[out] out_profile Pointer to receive created profile + * + * @return 0 on success, negative errno on failure + * @retval 0 Success + * @retval -EINVAL Invalid parameter + * @retval -ENOMEM Memory allocation failed + * @retval -EEXIST Profile already exists + * + * @note Thread-safe + * @see profile_destroy(), profile_activate() + * + * @par Example: + * @code + * profile_t* prof = NULL; + * int ret = profile_create("MyProfile", 300, &prof); + * if (ret == 0) { + * // Use profile... + * profile_destroy(prof); + * } + * @endcode + */ +int profile_create(const char* name, + unsigned int interval_sec, + profile_t** out_profile); +``` + +### Diagram Tools + +- **Mermaid Live Editor**: https://mermaid.live +- **VS Code Markdown Preview**: Built-in mermaid support +- **Documentation generators**: Can embed mermaid in output + +## Troubleshooting Common Documentation Issues + +### Issue: Code example doesn't compile + +**Solution**: Always test examples in isolation +```bash +# Extract example to test file +cat > test_example.c << 'EOF' +[paste example code] +EOF + +# Compile with project flags +gcc -Wall -Wextra -I../include test_example.c -o test_example + +# Run to verify +./test_example +``` + +### Issue: Diagram is too complex + +**Solution**: Break into multiple diagrams +- One high-level overview diagram +- Multiple focused detail diagrams +- Link them together in text + +### Issue: Outdated documentation + +**Solution**: Add CI check +```bash +# Check for TODOs in docs +grep -r "TODO\|FIXME\|XXX" docs/ && exit 1 + +# Check for broken links +markdown-link-check docs/**/*.md +``` + +## Examples From This Project + +See existing documentation for reference: +- [CURL Architecture](../../../source/docs/protocol/curl_usage_architecture.md) - Good 
example of architecture doc with diagrams +- [Memory Safety Skill](../memory-safety-analyzer/SKILL.md) - Example skill documentation +- [Build Instructions](../../../.github/instructions/build-system.instructions.md) - Integration guide example diff --git a/.github/skills/thread-safety-analyzer/SKILL.md b/.github/skills/thread-safety-analyzer/SKILL.md new file mode 100644 index 00000000..9d413f01 --- /dev/null +++ b/.github/skills/thread-safety-analyzer/SKILL.md @@ -0,0 +1,436 @@ +--- +name: thread-safety-analyzer +description: Analyze C/C++ code for thread safety issues including race conditions, deadlocks, and improper synchronization. Use when reviewing concurrent code or debugging threading issues. +--- + +# Thread Safety Analysis for Embedded C + +## Purpose + +Systematically analyze C/C++ code for thread safety issues that can cause race conditions, deadlocks, or performance degradation in embedded systems. + +## Usage + +Invoke this skill when: +- Reviewing multi-threaded code +- Debugging race conditions or deadlocks +- Optimizing synchronization overhead +- Validating thread creation and cleanup +- Investigating lock contention issues + +## Analysis Process + +### Step 1: Identify Shared Data + +Search for global and static variables: +- Global variables (especially non-const) +- Static variables in functions +- Shared heap allocations +- Reference-counted objects + +For each shared variable, verify: +1. How is it protected (mutex, atomic, etc.)? +2. Is the protection consistent across all accesses? +3. Are reads and writes both protected? +4. Is initialization thread-safe? + +### Step 2: Review Thread Creation + +Check all pthread_create calls: +- Are thread attributes used? +- Is stack size specified? +- Are threads detached or joinable? +- Is cleanup properly handled? 
+ +```c +// CHECK FOR: +pthread_t thread; +pthread_create(&thread, NULL, func, arg); // BAD: No attributes + +// SHOULD BE: +pthread_attr_t attr; +pthread_attr_init(&attr); +pthread_attr_setstacksize(&attr, 64 * 1024); // Explicit size +pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED); +pthread_create(&thread, &attr, func, arg); +pthread_attr_destroy(&attr); +``` + +### Step 3: Analyze Lock Usage + +For each mutex/rwlock: +- Is it initialized before use? +- Is it destroyed when done? +- Are lock/unlock pairs balanced? +- What is the lock ordering? +- Are locks held during expensive operations? + +Common patterns to check: +```c +// Pattern 1: Missing unlock on error path +pthread_mutex_lock(&lock); +if (error) return -1; // LEAK! +pthread_mutex_unlock(&lock); + +// Pattern 2: Lock ordering violation +// Thread 1: +pthread_mutex_lock(&a); +pthread_mutex_lock(&b); + +// Thread 2: +pthread_mutex_lock(&b); // Different order! +pthread_mutex_lock(&a); // DEADLOCK RISK! + +// Pattern 3: Heavy lock for simple operation +pthread_rwlock_wrlock(&lock); // Too heavy +counter++; +pthread_rwlock_unlock(&lock); +// Should use atomic_int instead +``` + +### Step 4: Check for Race Conditions + +Look for unprotected accesses to shared data: + +```c +// RACE: Read-modify-write without protection +if (shared_flag == 0) { // Thread 1 reads + shared_flag = 1; // Thread 2 also reads before Thread 1 writes +} + +// FIX: Use atomic or lock +pthread_mutex_lock(&lock); +if (shared_flag == 0) { + shared_flag = 1; +} +pthread_mutex_unlock(&lock); + +// OR: Use atomic compare-and-swap +int expected = 0; +atomic_compare_exchange_strong(&shared_flag, &expected, 1); +``` + +### Step 5: Verify Atomic Usage + +For atomic variables: +- Are they declared with proper type (atomic_int, atomic_bool)? +- Is memory ordering appropriate? +- Are non-atomic operations mixed with atomic ones? 
+
+```c
+// CHECK:
+atomic_int counter;
+
+// GOOD: Atomic operations
+atomic_fetch_add(&counter, 1);
+int value = atomic_load(&counter);
+
+// BAD: Plain (non-atomic) variable shared between threads
+int plain_counter;
+plain_counter++; // Data race! Declare as atomic_int or protect with a mutex
+```
+
+### Step 6: Deadlock Detection
+
+Check for common deadlock patterns:
+
+1. **Circular wait**: Lock A → Lock B, Lock B → Lock A
+2. **Lock held while waiting**: Mutex held during sleep/wait
+3. **Missing timeout**: Indefinite blocking without timeout
+4. **Signal under lock**: Condition signal while holding mutex
+
+```c
+// Deadlock Pattern 1: Circular dependency
+// Function 1:
+lock(mutex_a);
+lock(mutex_b); // Order: A, B
+
+// Function 2:
+lock(mutex_b);
+lock(mutex_a); // Order: B, A - DEADLOCK!
+
+// Deadlock Pattern 2: Lock held during expensive operation
+lock(mutex);
+expensive_network_call(); // Blocks other threads!
+unlock(mutex);
+
+// Deadlock Pattern 3: No timeout
+pthread_mutex_lock(&lock); // Waits forever on deadlock; consider pthread_mutex_timedlock()
+```
+
+### Step 7: Check Condition Variables
+
+For condition variables:
+- Is wait always in a loop?
+- Is predicate checked before and after wait?
+- Is signal/broadcast done correctly?
+- Is spurious wakeup handled?
+
+```c
+// GOOD: Proper condition variable usage
+pthread_mutex_lock(&mutex);
+while (!condition) { // Loop for spurious wakeups
+    pthread_cond_wait(&cond, &mutex);
+}
+// ... use protected data ...
+pthread_mutex_unlock(&mutex);
+
+// Signal:
+pthread_mutex_lock(&mutex);
+condition = true;
+pthread_cond_signal(&cond);
+pthread_mutex_unlock(&mutex);
+
+// BAD: Missing loop
+pthread_mutex_lock(&mutex);
+if (!condition) { // Should be 'while'!
+ pthread_cond_wait(&cond, &mutex); +} +pthread_mutex_unlock(&mutex); +``` + +## Common Issues and Fixes + +### Issue: Default Thread Stack Size + +```c +// PROBLEM: Wastes memory (8MB per thread) +pthread_t thread; +pthread_create(&thread, NULL, worker, arg); + +// FIX: Specify minimal stack size +pthread_attr_t attr; +pthread_attr_init(&attr); +pthread_attr_setstacksize(&attr, 64 * 1024); // 64KB +pthread_create(&thread, &attr, worker, arg); +pthread_attr_destroy(&attr); +``` + +### Issue: Heavy Synchronization + +```c +// PROBLEM: Reader-writer lock overkill +pthread_rwlock_t lock; +int counter; + +void increment() { + pthread_rwlock_wrlock(&lock); + counter++; + pthread_rwlock_unlock(&lock); +} + +// FIX: Use atomic operations +atomic_int counter; + +void increment() { + atomic_fetch_add(&counter, 1); // No lock needed +} +``` + +### Issue: Lock Ordering Violation + +```c +// PROBLEM: Different lock orders cause deadlock +// Thread 1: +void process_a_then_b() { + lock(&resource_a.lock); + lock(&resource_b.lock); + // ... +} + +// Thread 2: +void process_b_then_a() { + lock(&resource_b.lock); + lock(&resource_a.lock); // DEADLOCK! + // ... +} + +// FIX: Consistent ordering everywhere +void process_a_then_b() { + lock(&resource_a.lock); // Always A first + lock(&resource_b.lock); // Then B + // ... +} + +void process_b_then_a() { + lock(&resource_a.lock); // Always A first + lock(&resource_b.lock); // Then B + // ... +} +``` + +### Issue: Race in Lazy Initialization + +```c +// PROBLEM: Non-thread-safe initialization +static config_t* config = NULL; + +config_t* get_config() { + if (!config) { // Race here! 
+ config = malloc(sizeof(config_t)); + init_config(config); + } + return config; +} + +// FIX: Use pthread_once +static pthread_once_t init_once = PTHREAD_ONCE_INIT; +static config_t* config = NULL; + +static void init_config_once() { + config = malloc(sizeof(config_t)); + init_config(config); +} + +config_t* get_config() { + pthread_once(&init_once, init_config_once); + return config; +} +``` + +### Issue: Missing Lock on Error Path + +```c +// PROBLEM: Lock not released on error +int process_data(data_t* shared) { + pthread_mutex_lock(&shared->lock); + + if (validate(shared) != 0) { + return -1; // BUG: Lock not released! + } + + update(shared); + pthread_mutex_unlock(&shared->lock); + return 0; +} + +// FIX: Unlock on all paths +int process_data(data_t* shared) { + int ret = 0; + + pthread_mutex_lock(&shared->lock); + + if (validate(shared) != 0) { + ret = -1; + goto cleanup; + } + + update(shared); + +cleanup: + pthread_mutex_unlock(&shared->lock); + return ret; +} +``` + +### Issue: Long Critical Section + +```c +// PROBLEM: Expensive operation under lock +pthread_mutex_lock(&lock); +for (int i = 0; i < 1000000; i++) { + compute(); // Blocks other threads! 
+} +shared_result = final_value; +pthread_mutex_unlock(&lock); + +// FIX: Minimize critical section +int result = 0; +for (int i = 0; i < 1000000; i++) { + result += compute(); // No lock +} + +pthread_mutex_lock(&lock); +shared_result = result; // Lock only for update +pthread_mutex_unlock(&lock); +``` + +## Testing for Thread Safety + +### Compile with Thread Sanitizer + +```bash +# Build with thread sanitizer +gcc -g -fsanitize=thread -O1 source.c -o program -lpthread + +# Run +./program + +# Will report: +# - Data races +# - Lock ordering issues +# - Potential deadlocks +``` + +### Run Helgrind + +```bash +# Check for thread safety issues +valgrind --tool=helgrind \ + --track-lockorders=yes \ + ./program + +# Reports: +# - Race conditions +# - Lock order violations +# - Possible deadlocks +``` + +### Stress Testing + +```c +// Test under high concurrency +#define NUM_THREADS 100 +#define ITERATIONS 10000 + +void stress_test() { + pthread_t threads[NUM_THREADS]; + + for (int i = 0; i < NUM_THREADS; i++) { + pthread_create(&threads[i], NULL, worker, NULL); + } + + for (int i = 0; i < NUM_THREADS; i++) { + pthread_join(threads[i], NULL); + } + + // Verify invariants + assert(shared_counter == NUM_THREADS * ITERATIONS); +} +``` + +## Output Format + +Provide findings as: + +``` +## Thread Safety Analysis + +### Critical Issues (must fix) +1. [file.c:123] Race condition - unprotected access to shared_flag +2. [file.c:456] Deadlock potential - lock order violation (A→B vs B→A) +3. [file.c:789] Lock leak - mutex not released on error path + +### Warnings (should fix) +1. [file.c:234] Default thread stack - wastes 8MB per thread +2. [file.c:567] Heavy lock - use atomic_int instead of mutex +3. [file.c:890] Long critical section - holds lock during I/O + +### Recommendations +1. Establish lock ordering convention (document in header) +2. Use pthread_once for singleton initialization +3. Replace reader-writer locks with atomics for counters +4. 
Add thread sanitizer to CI pipeline + +### Suggested Fixes +[Provide specific code changes for each issue] +``` + +## Verification + +After fixes: +1. Thread sanitizer shows no errors +2. Helgrind reports clean +3. Stress tests pass consistently +4. Lock contention metrics acceptable +5. No deadlocks under load testing +6. Code review confirms thread safety diff --git a/CHANGELOG.md b/CHANGELOG.md index e687f448..a006a076 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -4,10 +4,19 @@ All notable changes to this project will be documented in this file. Dates are d Generated by [`auto-changelog`](https://github.com/CookPete/auto-changelog). +#### [1.8.2](https://github.com/rdkcentral/telemetry/compare/1.8.1...1.8.2) + +- RDKB-63722:Analyze and fix/mitigate memory leaks from curl_easy_perform calls [`#287`](https://github.com/rdkcentral/telemetry/pull/287) +- Agentic development and maintenance support [`#278`](https://github.com/rdkcentral/telemetry/pull/278) +- RDKEMW-10467: Fix Invalid time values caused by drift [`#212`](https://github.com/rdkcentral/telemetry/pull/212) + #### [1.8.1](https://github.com/rdkcentral/telemetry/compare/1.8.0...1.8.1) +> 27 February 2026 + - RDK-60476: Reduce default connection pool size to 1 [`#260`](https://github.com/rdkcentral/telemetry/pull/260) - RDK-60805: Adding L1 unit test cases for reportprofiles [`#265`](https://github.com/rdkcentral/telemetry/pull/265) +- Changelog updates for 1.8.1 release [`074a1d3`](https://github.com/rdkcentral/telemetry/commit/074a1d32e33924ce0f39e7cd7aad5bd6750376d4) #### [1.8.0](https://github.com/rdkcentral/telemetry/compare/1.7.4...1.8.0) diff --git a/README.md b/README.md new file mode 100644 index 00000000..6b0e8e7a --- /dev/null +++ b/README.md @@ -0,0 +1,291 @@ +# Telemetry 2.0 + +[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE) +[![C](https://img.shields.io/badge/Language-C-blue.svg)](https://en.wikipedia.org/wiki/C_(programming_language)) 
+[![Platform](https://img.shields.io/badge/Platform-Embedded%20Linux-orange.svg)](https://www.yoctoproject.org/) + +A lightweight, efficient telemetry framework for RDK (Reference Design Kit) embedded devices. + +## Overview + +Telemetry 2.0 provides real-time monitoring, event collection, and reporting capabilities optimized for resource-constrained embedded devices such as set-top boxes, gateways, and IoT devices. + +### Key Features + +- ⚡ **Efficient**: Connection pooling and batch reporting +- 🔒 **Secure**: mTLS support for encrypted communication +- 📊 **Flexible**: Profile-based configuration (JSON/XConf) +- 🔧 **Platform-Independent**: Multiple architecture support + +### Architecture Highlights + +```mermaid +graph TB + A[Telemetry Events/Markers] --> B[Profile Matcher] + B --> C[Report Generator] + C --> D[HTTP Connection Pool] + D --> E[XConf Server / Data Collector] + F[XConf Client] -.->|Config| B + G[Scheduler] -.->|Triggers| C +``` + +## Quick Start + +### Prerequisites + +- GCC 4.8+ or Clang 3.5+ +- pthread library +- libcurl 7.65.0+ +- cJSON library +- OpenSSL 1.1.1+ (for mTLS) + +### Build + +```bash +# Clone repository +git clone https://github.com/rdkcentral/telemetry.git +cd telemetry + +# Configure +autoreconf -i +./configure + +# Build +make + +# Install +sudo make install +``` + +### Docker Development + +Refer to the provided Docker container for a consistent development environment: + +https://github.com/rdkcentral/docker-device-mgt-service-test + + +See [Build Setup Guide](docs/integration/build-setup.md) for detailed build options. 
+ +### Basic Usage + +```c +#include "telemetry2_0.h" + +int main(void) { + // Initialize telemetry + if (t2_init() != 0) { + fprintf(stderr, "Failed to initialize telemetry\n"); + return -1; + } + + // Send a marker event + t2_event_s("SYS_INFO_DeviceBootup", "Device started successfully"); + + // Cleanup + t2_uninit(); + return 0; +} +``` + +Compile: `gcc -o myapp myapp.c -ltelemetry` + +## Documentation + +📚 **[Complete Documentation](docs/README.md)** + +### Key Documents + +- **[Architecture Overview](docs/architecture/overview.md)** - System design and components +- **[API Reference](docs/api/public-api.md)** - Public API documentation +- **[Developer Guide](docs/integration/developer-guide.md)** - Getting started +- **[Build Setup](docs/integration/build-setup.md)** - Build configuration +- **[Testing Guide](docs/integration/testing.md)** - Test procedures + +### Component Documentation + +Individual component documentation is in [`source/docs/`](source/docs/): + +- [Bulk Data System](source/docs/bulkdata/README.md) - Profile and marker management +- [HTTP Protocol](source/docs/protocol/README.md) - Communication layer +- [Scheduler](source/docs/scheduler/README.md) - Report scheduling +- [XConf Client](source/docs/xconf-client/README.md) - Configuration retrieval + +## Project Structure + +``` +telemetry/ +├── source/ # Source code +│ ├── bulkdata/ # Profile and marker management +│ ├── protocol/ # HTTP/RBUS communication +│ ├── scheduler/ # Report scheduling +│ ├── xconf-client/ # Configuration retrieval +│ ├── dcautil/ # Log marker utilities +│ └── test/ # Unit tests (gtest/gmock) +├── include/ # Public headers +├── config/ # Configuration files +├── docs/ # Documentation +├── containers/ # Docker development environment +└── test/ # Functional tests +``` + +## Configuration + +### Profile Configuration + +Telemetry uses JSON profiles to define what data to collect: + +```json +{ + "Profile": "RDKB_BasicProfile", + "Version": "1.0.0", + "Protocol": 
"HTTP", + "EncodingType": "JSON", + "ReportingInterval": 300, + "Parameters": [ + { + "type": "dataModel", + "name": "Device.DeviceInfo.Manufacturer" + }, + { + "type": "event", + "eventName": "bootup_complete" + } + ] +} +``` + +See [Profile Configuration Guide](docs/integration/profile-configuration.md) for details. + +### Environment Variables + +| Variable | Description | Default | +|----------|-------------|---------| +| `T2_ENABLE_DEBUG` | Enable debug logging | `0` | +| `T2_PROFILE_PATH` | Default profile directory | `/etc/DefaultT2Profile.json` | +| `T2_XCONF_URL` | XConf server URL | - | +| `T2_REPORT_URL` | Report upload URL | - | + +## Development + +### Running Tests + +```bash +# Unit tests +make check + +# Functional tests +cd test +./run_ut.sh + +# Code coverage +./cov_build.sh +``` + +### Development Container + +Use the provided Docker container for consistent development: + https://github.com/rdkcentral/docker-device-mgt-service-test + +```bash +cd docker-device-mgt-service-test +docker compose up -d +``` + +A directory above the current directory is mounted as a volume in /mnt/L2_CONTAINER_SHARED_VOLUME . +Login to the container as follows: +```bash +docker exec -it native-platform /bin/bash +cd /mnt/L2_CONTAINER_SHARED_VOLUME/telemetry +sh test/run_ut.sh +``` + +See [Docker Development Guide](containers/README.md) for more details. + +## Platform Support + +Telemetry 2.0 is designed to be platform-independent and has been tested on: + +- **RDK-B** (Broadband devices) +- **RDK-V** (Video devices) +- **Linux** (x86_64, ARM, ARM64) +- **Yocto Project** builds + +See [Platform Porting Guide](docs/integration/platform-porting.md) for porting to new platforms. + + +## Contributing + +We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines. + +### Development Workflow + +1. Fork the repository +2. Create a feature branch (`git checkout -b feature/amazing-feature`) +3. Make your changes +4. 
Add tests for new functionality +5. Ensure all L1 and L2 tests pass +6. Commit your changes (`git commit -m 'Add amazing feature'`) +7. Push to the branch (`git push origin feature/amazing-feature`) +8. Open a Pull Request + +### Code Style + +- Follow existing C code style and ensure astyle formatting and checks pass with below commands +```bash + find . -name '*.c' -o -name '*.h' | xargs astyle --options=.astylerc + find . -name '*.orig' -type f -delete + ``` + +- Use descriptive variable names +- Document all public APIs +- Add unit tests for new functions +- Add functional tests for new features + +See [Coding Guidelines](.github/instructions/c-embedded.instructions.md) for details. + +## Troubleshooting + +### Common Issues + +**Q: Telemetry not sending reports** +- Check network connectivity +- Verify XConf URL configuration +- Review logs in `/var/log/telemetry/` + +**Q: High memory usage** + +- Reduce number of active profiles +- Decrease reporting intervals +- Check for memory leaks with valgrind + +**Q: Build errors** + +- Ensure all dependencies installed +- Check compiler version (GCC 4.8+) +- Review build logs for missing libraries + +See [Troubleshooting Guide](docs/troubleshooting/common-errors.md) for more solutions. + +## License + +This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details. + +## Acknowledgments + +- RDK Management LLC +- RDK Community Contributors +- Open Source Community + +## Contact + +- **Repository**: https://github.com/rdkcentral/telemetry +- **Issues**: https://github.com/rdkcentral/telemetry/issues +- **RDK Central**: https://rdkcentral.com + +## Changelog + +See [CHANGELOG.md](CHANGELOG.md) for version history and release notes. 
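
## Quick Local Test: Pushing a Profile

For local testing inside the development container, profiles like the one in the Configuration section can be pushed to the daemon over RBus through the `Device.X_RDKCENTRAL-COM_T2.ReportProfiles` parameter. The sketch below only assembles and validates the JSON on the host; the envelope field names (`profiles`, `name`, `hash`, `value`) and the final `rbuscli` step are assumptions to verify against your setup:

```shell
# Assemble a minimal multiprofile envelope around a profile like the one in the
# Configuration section (envelope field names are assumptions, not authoritative).
cat > /tmp/t2_profiles.json <<'EOF'
{
  "profiles": [
    {
      "name": "RDKB_BasicProfile",
      "hash": "demo-hash-1",
      "value": {
        "Version": "1.0.0",
        "Protocol": "HTTP",
        "EncodingType": "JSON",
        "ReportingInterval": 300
      }
    }
  ]
}
EOF

# Validate the JSON before handing it to the daemon (python3 is available
# in the development container).
python3 -m json.tool /tmp/t2_profiles.json > /dev/null && echo "profile JSON OK"

# Inside the native-platform container, the validated blob would then be set with:
#   rbuscli setvalues Device.X_RDKCENTRAL-COM_T2.ReportProfiles string "$(cat /tmp/t2_profiles.json)"
```

After setting the parameter, check the logs under `/var/log/telemetry/` (see Troubleshooting) to confirm the profile was accepted.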
+ +--- + +**Built for the RDK Community** diff --git a/containers/CHANGELOG.md b/containers/CHANGELOG.md deleted file mode 100644 index 5757c5e7..00000000 --- a/containers/CHANGELOG.md +++ /dev/null @@ -1,13 +0,0 @@ -### Changelog - -All notable changes to this project will be documented in this file. Dates are displayed in UTC. - -Generated by [`auto-changelog`](https://github.com/CookPete/auto-changelog). - -#### 1.0.1 - -- Add Level 2 Integration Tests [`#9`](https://github.com/rdkcentral/telemetry/pull/9) -- RDK-53835 Telemetry gives compilation issues when building without mount-utils. [`#7`](https://github.com/rdkcentral/telemetry/pull/7) -- RDKE-281 Change Licenses.txt in headers to LICENSE [`#5`](https://github.com/rdkcentral/telemetry/pull/5) -- Switch ssh to https in containers build script [`#2`](https://github.com/rdkcentral/telemetry/pull/2) -- Import of Comcast source (develop) [`43f60a7`](https://github.com/rdkcentral/telemetry/commit/43f60a7f8788a1544abc84b924f5d845d2d368f9) diff --git a/containers/README.md b/containers/README.md index 3f4bff3f..5f81552d 100644 --- a/containers/README.md +++ b/containers/README.md @@ -5,12 +5,20 @@ This container is intended to be used for faster development and integration tes It includes a pre-built version of the application and all the dependencies required to run the application. All tools required to build the application are also included in the container. +This project reuses the existing dockers from https://github.com/rdkcentral/docker-device-mgt-service-test, which provides a base image with all the necessary tools and dependencies for building and testing the application. + +- [native-platform](https://github.com/rdkcentral/docker-device-mgt-service-test/pkgs/container/docker-device-mgt-service-test%2Fnative-platform) + +The application is built and tested inside the container using the existing build and test scripts. 
+Container images are built and pushed to Docker Hub as part of the CI/CD pipeline for the project. Images are available at : + +- [mockxconf](https://github.com/rdkcentral/docker-device-mgt-service-test/pkgs/container/docker-device-mgt-service-test%2Fmockxconf) + A container that mocks the required XCONF services is also included as a separate container within the same docker network. The mock XCONF services are used to simulate the XCONF services that are required by the application. Service will be available at `http://mockxconf:50050` . In the current version mock WEBCONFIG services are not included. -An alternative utility is provided to mock a very similar setter commands that will set the IPC name space `Device.X_RDKCENTRAL-COM_T2.ReportProfilesMsgPack` with the same format as the actual WEBCONFIG service. Utilities to set multiprofiles in JSON format will also be included in the container in the next release. Until next release please use the following command to set the multiprofiles in JSON format. @@ -23,47 +31,6 @@ rbuscli setvalues Device.X_RDKCENTRAL-COM_T2.ReportProfiles string '' ## Prerequisites + A working installation of Docker and Docker Compose is required to run this container. Host machine should have relevant credentials to clone dependant repositories from rdkcentral github. - -## Steps to build the container -1. Clone the repository to the host machine. -2. Change directory to the cloned repository's sub-directory `containers`. -3. Run the following command to build the container: -``` -./build.sh -``` - -## Steps to run the container and mock services - -### Pre-requisites -1. Ensure there is a directory named L2_CONTAINER_SHARED_VOLUME crated in home directory of the host machine. -2. If user desires to mount a different directory, please update the `docker-compose.yaml` file with the desired path. -3. 
Both containers mount the same directory to share the data between them and is available at `/mnt/L2_CONTAINER_SHARED_VOLUME` directory in the container. - -From within same location where compose.yaml is available, run the following command to start the container: -``` -docker-compose up -``` - -## Logging inside individual application container -### Native Platform container -``` -docker exec -it native-platform /bin/bash -``` -### Webservices container -- The services are enabled with https support with selfsigned certificates. -- Selfsigned certificates are shared between the containers. -- Leveraging the capabilities of docker network, self-signed certificates are added to the trusted certificates of the container which makes the services accessible over https without any certificate warnings. - - -#### Services are available at -`https://mockxconf:50051/dataLakeMock` ==> Data upload service -`https://mockxconf:50050/loguploader/getT2DCMSettings` ==> DCA settings service - -- Data source for the services is available at `/etc/xconf/xconf-dcm-response.json` directory. - -#### Login to the container using the following command: -``` -docker exec -it mockxconf /bin/bash -``` \ No newline at end of file diff --git a/containers/build.sh b/containers/build.sh deleted file mode 100755 index f8b9b741..00000000 --- a/containers/build.sh +++ /dev/null @@ -1,45 +0,0 @@ -#!/bin/bash -########################################################################## -# If not stated otherwise in this file or this component's LICENSE -# file the following copyright and licenses apply: -# -# Copyright 2024 RDK Management -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -########################################################################## - - -# Build container that provides mock xconf service -cd mock-xconf -docker build --no-cache -t mock-xconf:latest -f Dockerfile . -cd - - - -cd native-platform -rm -rf rdk_logger -rm -rf WebconfigFramework - -git clone https://github.com/rdkcentral/rdk_logger.git -git clone https://github.com/rdkcentral/WebconfigFramework.git -# Dump the contents of /etc/xconf/certs/mock-xconf-server-cert.pem from above container into a file called mock-xconf-server-cert.pem -docker run --rm mockxconf:latest cat /etc/xconf/certs/mock-xconf-server-cert.pem > mock-xconf-server-cert.pem - -docker build -t native-platform:latest -f Dockerfile . 
- -rm -f mock-xconf-server-cert.pem -rm -rf rdk_logger -rm -rf WebconfigFramework -cd - - - - diff --git a/containers/compose.yaml b/containers/compose.yaml deleted file mode 100644 index a30ac4ee..00000000 --- a/containers/compose.yaml +++ /dev/null @@ -1,19 +0,0 @@ -services: - - mock-xconf: - image: "ghcr.io/rdkcentral/docker-device-mgt-service-test/mockxconf:latest" - container_name: "mockxconf" - ports: - - "50050:50050" - - "50051:50051" - volumes: - - ../:/mnt/L2_CONTAINER_SHARED_VOLUME - - - l2-container: - image: "ghcr.io/rdkcentral/docker-device-mgt-service-test/native-platform" - container_name: "native-platform" - depends_on: - - mock-xconf - volumes: - - ../:/mnt/L2_CONTAINER_SHARED_VOLUME diff --git a/containers/mock-xconf/Dockerfile b/containers/mock-xconf/Dockerfile deleted file mode 100644 index 41aca212..00000000 --- a/containers/mock-xconf/Dockerfile +++ /dev/null @@ -1,54 +0,0 @@ -FROM ubuntu:latest -ARG DEBIAN_FRONTEND=noninteractive - - -#COPY ./index.html /usr/local/apache2/htdocs/ -#COPY data.json /usr/local/apache2/htdocs/data - -# This is to mimic the legacy XCONF service api availability for the test cases -# In a new environment bring up, would have hosted the service without references to loguploader -# However to support the legacy backend service, the XCONF response even today has references to loguploader settings -# RUN mkdir -p /usr/local/apache2/htdocs/loguploader -# COPY xconf-dcm-response.json /usr/local/apache2/htdocs/loguploader/getT2DCMSettings - -RUN apt-get update && apt-get install -y vim curl wget nodejs npm openssl - -RUN mkdir -p /opt -RUN cd /opt -RUN openssl genrsa -out key.pem 2048 -RUN openssl req -new -days 730 -key key.pem -out csr.pem -subj "/C=US/ST=CA/L=San Francisco/O=Platform Security/OU=Platform Security/CN=mockxconf" -RUN openssl x509 -req -in csr.pem -signkey key.pem -out cert.pem - -# Create a directory to store the certificates -RUN mkdir -p /etc/xconf/certs -RUN mv key.pem 
/etc/xconf/certs/mock-xconf-server-key.pem -RUN mv cert.pem /etc/xconf/certs/mock-xconf-server-cert.pem -RUN rm -rf /opt/* - -# TODO: Upgrade to mtls support depending on the offerings from wrappers for SSA - - -RUN mkdir -p /mnt/L2_CONTAINER_SHARED_VOLUME - -# Reduce image size -RUN apt-get clean && rm -rf /var/lib/apt/lists/* - -# Install the apps and dependencies -RUN mkdir -p /etc/xconf/ -COPY xconf-dcm-response.json /etc/xconf/xconf-dcm-response.json -COPY xconf-dcm-response1.json /etc/xconf/xconf-dcm-response1.json - -RUN mkdir -p /usr/local/bin -COPY ./entrypoint.sh /usr/local/bin/entrypoint.sh -RUN chmod +x /usr/local/bin/entrypoint.sh - -COPY data-lake-mock.js /usr/local/bin/data-lake-mock.js -RUN chmod +x /usr/local/bin/data-lake-mock.js - -COPY getT2DCMSettings.js /usr/local/bin/getT2DCMSettings.js - -CMD ["/usr/local/bin/entrypoint.sh"] - -EXPOSE 50050 -EXPOSE 50051 - diff --git a/containers/mock-xconf/data-lake-mock.js b/containers/mock-xconf/data-lake-mock.js deleted file mode 100644 index 7469d3a3..00000000 --- a/containers/mock-xconf/data-lake-mock.js +++ /dev/null @@ -1,135 +0,0 @@ -/* - * If not stated otherwise in this file or this component's LICENSE file the - * following copyright and licenses apply: - * - * Copyright 2024 RDK Management - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
-*/ - -const https = require('node:https'); -const path = require('node:path'); -const fs = require('node:fs'); -const url = require('node:url'); - -var saveXconfJson = false; -var saved_XconfJson = {}; - -var saveReportJson = false; -var saved_ReportJson = {}; - -const options = { - key: fs.readFileSync(path.join('/etc/xconf/certs/mock-xconf-server-key.pem')), - cert: fs.readFileSync(path.join('/etc/xconf/certs/mock-xconf-server-cert.pem')), - port : 50051 -}; - -function handleAdminSupportReport(req, res){ - const queryObject = url.parse(req.url, true).query; - if (queryObject.saveRequest === 'true') { - saveReportJson = true; - }else if (queryObject.saveRequest === 'false'){ - saveReportJson = false; - saved_ReportJson = {}; - } - if ( queryObject.returnData === 'true'){ - res.writeHead(200, {'Content-Type': 'application/json'}); - res.end(JSON.stringify(saved_ReportJson)); - return; - } - res.writeHead(200, {'Content-Type': 'application/json'}); - res.end('Message received at Data Lake Mock Server\n'); - return; -} - -function handleAdminSupportXconf(req, res){ - const queryObject = url.parse(req.url, true).query; - if (queryObject.saveRequest === 'true') { - saveXconfJson = true; - }else if (queryObject.saveRequest === 'false'){ - saveXconfJson = false; - saved_XconfJson = {}; - } - if ( queryObject.returnData === 'true'){ - res.writeHead(200, {'Content-Type': 'application/json'}); - res.end(JSON.stringify(saved_XconfJson)); - return; - } - res.writeHead(200, {'Content-Type': 'application/json'}); - res.end('Message received at Data Lake Mock Server\n'); - return; -} - -/** - * Handles the incoming request and logs the data received - * @param {http.IncomingMessage} req - * @param {http.ServerResponse} res - */ -function requestHandler(req, res) { - - if (req.url.startsWith('/dataLakeMockXconf')) { - var data = ''; - req.on('data', function(chunk) { - data += chunk; - }); - req.on('end', function() { - console.log('Data received: ' + data); - if (saveXconfJson 
== true){ - const postData = JSON.parse(body); - - saved_XconfJson[new Date().toISOString()] = { ...postData }; - } - }); - res.writeHead(200, {'Content-Type': 'application/json'}); - res.end('Message received at Data Lake Mock Server\n'); - - } - else if (req.url.startsWith('/dataLakeMockReportProfile')){ - var data = ''; - req.on('data', function(chunk) { - data += chunk; - }); - req.on('end', function() { - console.log('Data received: ' + data); - if (saveReportJson == true){ - const postData = JSON.parse(body); - saved_ReportJson[new Date().toISOString()] = { ...postData }; - } - }); - res.writeHead(200, {'Content-Type': 'application/json'}); - res.end('Message received at Data Lake Mock Server\n'); - - } else if (req.url.startsWith('/adminSupportXconf')){ - - return handleAdminSupportXconf(req, res); - - } else if(req.url.startsWith('/adminSupportReport')){ - - return handleAdminSupportReport(req, res); - - }else { - res.writeHead(404, {'Content-Type': 'text/plain'}); - res.end('Not Found\n'); - } -} - - - -const serverinstance = https.createServer(options, requestHandler); -serverinstance.listen( - options.port, - () => { - console.log('Data Lake Mock Server running at https://localhost:50051/'); - } -); - diff --git a/containers/mock-xconf/data.json b/containers/mock-xconf/data.json deleted file mode 100644 index 493ec265..00000000 --- a/containers/mock-xconf/data.json +++ /dev/null @@ -1,3 +0,0 @@ -{ - "Hello" : "World" -} diff --git a/containers/mock-xconf/entrypoint.sh b/containers/mock-xconf/entrypoint.sh deleted file mode 100644 index 95449cc0..00000000 --- a/containers/mock-xconf/entrypoint.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/sh - -#set -m - -node /usr/local/bin/data-lake-mock.js & - -#httpd-foreground -node /usr/local/bin/getT2DCMSettings.js & - - -## Keep the container running . Running an independent process will help in simulating scenarios of webservices going down and coming up -while true ; do echo "Mocked webservice heartbeat ..." 
&& sleep 5 ; done \ No newline at end of file diff --git a/containers/mock-xconf/getT2DCMSettings.js b/containers/mock-xconf/getT2DCMSettings.js deleted file mode 100644 index f94e8fd3..00000000 --- a/containers/mock-xconf/getT2DCMSettings.js +++ /dev/null @@ -1,154 +0,0 @@ -/* - * If not stated otherwise in this file or this component's LICENSE file the - * following copyright and licenses apply: - * - * Copyright 2024 RDK Management - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
-*/ - -const https = require('node:https'); -const path = require('node:path'); -const fs = require('node:fs'); -const url = require('node:url'); - -const options = { - key: fs.readFileSync(path.join('/etc/xconf/certs/mock-xconf-server-key.pem')), - cert: fs.readFileSync(path.join('/etc/xconf/certs/mock-xconf-server-cert.pem')), - port: 50050 -}; - -let save_request = false; -let savedrequest_json={}; - -/** - * Function to read JSON file and return the data - */ -function readJsonFile(count) { - if(count == 0){ - var filePath = path.join('/etc/xconf', 'xconf-dcm-response.json'); - } - else{ - var filePath = path.join('/etc/xconf', 'xconf-dcm-response1.json'); - - } - try { - const fileData = fs.readFileSync(filePath, 'utf8'); - return JSON.parse(fileData); - } catch (error) { - console.error('Error reading or parsing JSON file:', error); - return null; - } -} - -function handleT2DCMSettings(req, res, queryObject, file) { - let data = ''; - req.on('data', function(chunk) { - data += chunk; - }); - req.on('end', function() { - console.log('Data received: ' + data); - }); - - if (save_request) { - savedrequest_json[new Date().toISOString()] = { ...queryObject }; - } - - res.writeHead(200, {'Content-Type': 'application/json'}); - res.end(JSON.stringify(readJsonFile(file))); - return; -} - -function handleAdminSet(req, res, queryObject) { - if (queryObject.saveRequest === 'true') { - save_request = true; - } else if (queryObject.saveRequest === 'false') { - save_request = false; - savedrequest_json={}; - } -} - -function handleAdminGet(req, res, queryObject) { - if (queryObject.returnData === 'true') { - res.writeHead(200, {'Content-Type': 'application/json'}); - res.end(JSON.stringify(savedrequest_json)); - return; - } - res.writeHead(200); - res.end('Admin Get Unknown Request'); -} - -/** - * Handles the incoming request and logs the data received - * @param {http.IncomingMessage} req - The incoming request object - * @param {http.ServerResponse} res - The server 
response object - */ -function requestHandler(req, res) { - const queryObject = url.parse(req.url, true).query; - console.log('Query Object: ' + JSON.stringify(queryObject)); - console.log('Request received: ' + req.url); - console.log('json'+JSON.stringify(savedrequest_json)); - if (req.method === 'GET') { - - if (req.url.startsWith('/loguploader/getT2DCMSettings')) { - - return handleT2DCMSettings(req, res, queryObject,0); - - } - else if (req.url.startsWith('/adminSupportSet')) { - - handleAdminSet(req, res, queryObject); - - } - else if (req.url.startsWith('/adminSupportGet')) { - - return handleAdminGet(req, res, queryObject); - - } - else if (req.url.startsWith('/loguploader404/getT2DCMSettings')) { - res.writeHead(404); - res.end("404 No Content"); - return; - } - else if (req.url.startsWith('/loguploader1/getT2DCMSettings')){ - return handleT2DCMSettings(req, res, queryObject,1); - } -} -else if (req.method === 'POST') { - // TO BE IMPLEMENTED - if (req.url.startsWith('/updateT2DCMSettings')) { - let body = ''; - req.on('data', chunk => { - body += chunk.toString(); - }); - req.on('end', () => { - const postData = JSON.parse(body); - const redirect_json = { ...postData }; - redirect_json[new Date().toISOString()] = { ...queryObject };;// Example of adding a timestamped entry - - res.writeHead(200, {'Content-Type': 'application/json'}); - res.end(JSON.stringify(redirect_json)); - }); - } - -} - res.writeHead(200); - res.end("Server is Up Please check the request...."); -} -const serverInstance = https.createServer(options, requestHandler); -serverInstance.listen( - options.port, - () => { - console.log('XCONF DCM Mock Server running at https://localhost:50050/'); - } -); \ No newline at end of file diff --git a/containers/mock-xconf/index.html b/containers/mock-xconf/index.html deleted file mode 100644 index 8be6678f..00000000 --- a/containers/mock-xconf/index.html +++ /dev/null @@ -1,24 +0,0 @@ - - - - A mock xconf service provide is running - - -

Select which response you would prefer as response for getlogUploaderSettings

- -
- - - -
- - - - - - - diff --git a/containers/mock-xconf/xconf-dcm-response.json b/containers/mock-xconf/xconf-dcm-response.json deleted file mode 100644 index d317ed65..00000000 --- a/containers/mock-xconf/xconf-dcm-response.json +++ /dev/null @@ -1,120 +0,0 @@ -{ - "urn:settings:GroupName": "Generic", - "urn:settings:CheckOnReboot": false, - "urn:settings:TimeZoneMode": "UTC", - "urn:settings:CheckSchedule:cron": "37 5 * * *", - "urn:settings:CheckSchedule:DurationMinutes": 300, - "urn:settings:LogUploadSettings:Message": null, - "urn:settings:LogUploadSettings:Name": "Generic", - "urn:settings:LogUploadSettings:NumberOfDays": 0, - "urn:settings:LogUploadSettings:UploadRepositoryName": "S3", - "urn:settings:LogUploadSettings:UploadRepository:URL": "https://upload.test.url", - "urn:settings:LogUploadSettings:UploadRepository:uploadProtocol": "HTTP", - "urn:settings:LogUploadSettings:UploadOnReboot": false, - "urn:settings:LogUploadSettings:UploadImmediately": false, - "urn:settings:LogUploadSettings:upload": true, - "urn:settings:LogUploadSettings:UploadSchedule:cron": "36 6 * * *", - "urn:settings:LogUploadSettings:UploadSchedule:levelone:cron": null, - "urn:settings:LogUploadSettings:UploadSchedule:leveltwo:cron": null, - "urn:settings:LogUploadSettings:UploadSchedule:levelthree:cron": null, - "urn:settings:LogUploadSettings:UploadSchedule:TimeZoneMode": "UTC", - "urn:settings:LogUploadSettings:UploadSchedule:DurationMinutes": 240, - "urn:settings:VODSettings:Name": null, - "urn:settings:VODSettings:LocationsURL": null, - "urn:settings:VODSettings:SRMIPList": null, - "urn:settings:TelemetryProfile": { - "@type": "PermanentTelemetryProfile", - "id": "9e9fce57-8053-423e-a647-a537ca8b7899", - "telemetryProfile": [ - { - "header": "SYS_ERROR_KernelPanic_reboot", - "content": "Received reboot_reason as:kernel-panic", - "type": "BootTime.log", - "pollingFrequency": "0" - }, - { - "header": "SYS_ERROR_PeerDown_reboot", - "content": "Received reboot_reason as:Peer_down", - 
"type": "BootTime.log", - "pollingFrequency": "0" - }, - { - "header": "SYS_ERROR_Thermal_reboot", - "content": "Received reboot_reason as:thermal", - "type": "BootTime.log", - "pollingFrequency": "0" - }, - { - "header": "RF_ERROR_Wan_down", - "content": "wan_service-status is stopped, take log back up", - "type": "Consolelog.txt.0", - "pollingFrequency": "0" - }, - { - "header": "RF_ERROR_wan_restart", - "content": "wan_service-status is started again, upload logs", - "type": "Consolelog.txt.0", - "pollingFrequency": "0" - }, - { - "header": "SYS_ERROR_Drop_cache", - "content": "test-and-diagnostic", - "type": "", - "pollingFrequency": "0" - }, - { - "header": "SYS_ERROR_LOGUPLOAD_FAILED", - "content": "rdklogger", - "type": "", - "pollingFrequency": "0" - }, - { - "header": "Total_online_clients_split", - "content": "ccsp-lm-lite", - "type": "", - "pollingFrequency": "0" - }, - { - "header": "WIFI_INFO_TEST_offline", - "content": "test-component", - "type": "", - "pollingFrequency": "0" - }, - { - "header": "WIFI_INFO_TEST_online", - "content": "test-component", - "type": "", - "pollingFrequency": "0" - }, - { - "header": "btime_clientconn_split", - "content": "ccsp-lm-lite", - "type": "", - "pollingFrequency": "0" - }, - { - "header": "RF_ERROR_IPV4PingFailed", - "content": "test-and-diagnostic", - "type": "", - "pollingFrequency": "0" - }, - { - "header": "RF_ERROR_IPV6PingFailed", - "content": "test-and-diagnostic", - "type": "", - "pollingFrequency": "0" - }, - { - "header": "RF_ERROR_TEST_stopped", - "content": "test-and-diagnostic", - "type": "", - "pollingFrequency": "0" - } - ], - "schedule": "*/15 * * * *", - "expires": 0, - "telemetryProfile:name": "Generic_Profile", - "uploadRepository:URL": "https://mockxconf:50051/dataLakeMockXconf", - "uploadRepository:uploadProtocol": "HTTP" - } -} \ No newline at end of file diff --git a/containers/mock-xconf/xconf-dcm-response1.json b/containers/mock-xconf/xconf-dcm-response1.json deleted file mode 100644 index 
8316d361..00000000 --- a/containers/mock-xconf/xconf-dcm-response1.json +++ /dev/null @@ -1,48 +0,0 @@ -{ - "urn:settings:GroupName": "Generic", - "urn:settings:CheckOnReboot": false, - "urn:settings:TimeZoneMode": "UTC", - "urn:settings:CheckSchedule:cron": "37 5 * * *", - "urn:settings:CheckSchedule:DurationMinutes": 300, - "urn:settings:LogUploadSettings:Message": null, - "urn:settings:LogUploadSettings:Name": "Generic", - "urn:settings:LogUploadSettings:NumberOfDays": 0, - "urn:settings:LogUploadSettings:UploadRepositoryName": "S3", - "urn:settings:LogUploadSettings:UploadRepository:URL": "https://secure.s3.bucket.test.url", - "urn:settings:LogUploadSettings:UploadRepository:uploadProtocol": "HTTP", - "urn:settings:LogUploadSettings:UploadOnReboot": false, - "urn:settings:LogUploadSettings:UploadImmediately": false, - "urn:settings:LogUploadSettings:upload": true, - "urn:settings:LogUploadSettings:UploadSchedule:cron": "36 6 * * *", - "urn:settings:LogUploadSettings:UploadSchedule:levelone:cron": null, - "urn:settings:LogUploadSettings:UploadSchedule:leveltwo:cron": null, - "urn:settings:LogUploadSettings:UploadSchedule:levelthree:cron": null, - "urn:settings:LogUploadSettings:UploadSchedule:TimeZoneMode": "UTC", - "urn:settings:LogUploadSettings:UploadSchedule:DurationMinutes": 240, - "urn:settings:VODSettings:Name": null, - "urn:settings:VODSettings:LocationsURL": null, - "urn:settings:VODSettings:SRMIPList": null, - "urn:settings:TelemetryProfile": { - "@type": "PermanentTelemetryProfile", - "id": "9e9fce57-8053-423e-a647-a537ca8b7899", - "telemetryProfile": [ - { - "header": "SYS_GREP_TEST", - "content": "This is Test Log", - "type": "test.log", - "pollingFrequency": "0" - }, - { - "header": "SYS_EVENT_TEST", - "content": "generic", - "type": "", - "pollingFrequency": "0" - } - ], - "schedule": "*/1 * * * *", - "expires": 0, - "telemetryProfile:name": "NEW TEST PROFILE", - "uploadRepository:URL": "https://mockxconf:50051/dataLakeMockXconf", - 
"uploadRepository:uploadProtocol": "HTTP" - } -} \ No newline at end of file diff --git a/containers/native-platform/Dockerfile b/containers/native-platform/Dockerfile deleted file mode 100644 index 0677499e..00000000 --- a/containers/native-platform/Dockerfile +++ /dev/null @@ -1,102 +0,0 @@ -FROM ubuntu:jammy -ARG DEBIAN_FRONTEND=noninteractive -# Get the revision number from the build environment -# This must be a release tag derived from a github relase event context -ARG REVISION=0.0.0 - -# Add instructions to install autotools -RUN apt-get update && apt-get install -y build-essential \ - wget openssl tar vim git git-lfs - - -RUN apt-get install -y libtool autotools-dev automake zlib1g-dev ninja-build meson - -RUN apt-get install -y libglib2.0-dev libcurl4-openssl-dev \ - libmsgpack-dev libsystemd-dev libssl-dev libcjson-dev python3-pip libsqlite3-dev - -RUN apt-get install -y libgtest-dev libgmock-dev libjansson-dev libbsd-dev tcl-dev \ - libboost-all-dev libwebsocketpp-dev libcunit1 libcunit1-dev libunwind-dev - -RUN apt-get update && apt-get install -y gdb valgrind lcov g++ wget gperf curl - -# Add additional pytest packages -RUN apt-get update && apt-get install -y \ - python3-pytest python3-pytest-cov software-properties-common - -RUN add-apt-repository ppa:deadsnakes/ppa && apt-get update && apt-get install -y \ - python3-pytest-bdd - -RUN pip3 install requests pytest-ordering pytest-json-report pytest-html msgpack - -# Install CMake -RUN apt-get update && apt-get install -y cmake - -# Install gtest libraries -RUN cd /usr/src/googletest/googlemock/ && mkdir build && cmake .. 
&& make && make install - -RUN apt-get update && apt-get install -y \ - liblog4c-dev - -COPY ./mock-xconf-server-cert.pem /usr/share/ca-certificates/mock-xconf-server-cert.pem -RUN chmod 644 /usr/share/ca-certificates/mock-xconf-server-cert.pem -RUN echo "mock-xconf-server-cert.pem" >> /etc/ca-certificates.conf -RUN update-ca-certificates --fresh - -# Build and install test binary that acts as a provider for all mandatory RFC parameters -RUN mkdir -p /opt -COPY dependent_rdk_pkg_installer.sh /opt/dependent_rdk_pkg_installer.sh -RUN chmod +x /opt/dependent_rdk_pkg_installer.sh && /opt/dependent_rdk_pkg_installer.sh -RUN rm -rf /opt/dependent_rdk_pkg_installer.sh - -COPY rdk_logger /opt/rdk_logger -RUN cd /opt/rdk_logger && autoreconf --install && ./configure && make && make install -RUN rm -rf /opt/rdk_logger -COPY debug.ini /etc/debug.ini -COPY log4crc /etc/log4crc - -COPY WebconfigFramework /opt/WebconfigFramework -RUN cd /opt/WebconfigFramework && export INSTALL_DIR='/usr/local'&& \ -export CFLAGS="-I${INSTALL_DIR}/include/rtmessage -I${INSTALL_DIR}/include/msgpack -I${INSTALL_DIR}/include/rbus -I${INSTALL_DIR}/include" && \ -export LDFLAGS="-L${INSTALL_DIR}/lib" && \ -autoreconf --install && \ -./configure --prefix=/usr/local && make && make install && cp -r include/* /usr/local/include/ -RUN rm -rf /opt/WebconfigFramework - -# Mock implementation of RFC provider in target device -COPY ./ /opt/containers -RUN cd /opt/containers/mock-rfc-providers && export INSTALL_DIR='/usr/local' && \ -export CFLAGS="-I${INSTALL_DIR}/include/rtmessage -I${INSTALL_DIR}/include/msgpack -I${INSTALL_DIR}/include/rbus -I${INSTALL_DIR}/include" && \ -export LDFLAGS="-L${INSTALL_DIR}/lib" && \ -autoreconf --install && \ -./configure --prefix=/usr/local && make && make install -RUN rm -rf /opt/containers - -# Trim down the docker image size -RUN rm -rf /var/lib/apt/lists/* - -# Emulate the device side settings -RUN mkdir -p /opt/logs -RUN mkdir -p /etc -RUN echo "BUILD_TYPE=PROD" > 
/etc/device.properties - -RUN echo "DEVICE_NAME=DEVELOPER_CONTAINER" >> /etc/device.properties -RUN echo "LOG_PATH=/opt/logs/" > /etc/include.properties -RUN echo "PERSISTENT_PATH=/opt/" >> /etc/include.properties -RUN echo "imagename=T2_Container_${REVISION}" >> /version.txt - -# Custom exepectation from RDK stack to have a timezone file in the stack -RUN mkdir -p /opt/persistent -RUN echo 'US/Mountain' > /opt/persistent/timeZoneDST - -# Create A Shared Volume -RUN mkdir -p /mnt/L2_CONTAINER_SHARED_VOLUME - -COPY entrypoint.sh /usr/local/bin/entrypoint.sh -RUN chmod +x /usr/local/bin/entrypoint.sh -RUN echo 'export PATH=$PATH:/usr/local/bin' >> ~/.bashrc -RUN echo 'export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:/usr/local/lib:$LD_LIBRARY_PATH' >> ~/.bashrc - -# Set the entry point command -#CMD ["/bin/bash"] - -CMD ["/usr/local/bin/entrypoint.sh"] diff --git a/containers/native-platform/debug.ini b/containers/native-platform/debug.ini deleted file mode 100644 index 89923d7f..00000000 --- a/containers/native-platform/debug.ini +++ /dev/null @@ -1,24 +0,0 @@ -########################################################################## -# If not stated otherwise in this file or this component's Licenses.txt -# file the following copyright and licenses apply: -# -# Copyright 2016 RDK Management -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-########################################################################## -#### Logging #### - -EnableMPELog = TRUE - -SEPARATE.LOGFILE.SUPPORT=TRUE -LOG.RDK.T2 = FATAL ERROR WARNING NOTICE INFO DEBUG \ No newline at end of file diff --git a/containers/native-platform/dependent_rdk_pkg_installer.sh b/containers/native-platform/dependent_rdk_pkg_installer.sh deleted file mode 100644 index ab2ab4ce..00000000 --- a/containers/native-platform/dependent_rdk_pkg_installer.sh +++ /dev/null @@ -1,31 +0,0 @@ -########################################################################## -# If not stated otherwise in this file or this component's LICENSE -# file the following copyright and licenses apply: -# -# Copyright 2024 RDK Management -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-########################################################################## - -# Clone and build rbus -export RBUS_ROOT=/usr -export RBUS_INSTALL_DIR=${RBUS_ROOT}/local -mkdir -p $RBUS_INSTALL_DIR -cd $RBUS_ROOT - - -git clone https://github.com/rdkcentral/rbus -cmake -Hrbus -Bbuild/rbus -DBUILD_FOR_DESKTOP=ON -DCMAKE_BUILD_TYPE=Debug -make -C build/rbus && make -C build/rbus install - -#rtrouted -f -l DEBUG \ No newline at end of file diff --git a/containers/native-platform/entrypoint.sh b/containers/native-platform/entrypoint.sh deleted file mode 100644 index b7827afc..00000000 --- a/containers/native-platform/entrypoint.sh +++ /dev/null @@ -1,39 +0,0 @@ -#!/usr/bin/env bash - -########################################################################## -# If not stated otherwise in this file or this component's LICENSE -# file the following copyright and licenses apply: -# -# Copyright 2024 RDK Management -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -########################################################################## - -export RBUS_ROOT=/ -export RBUS_INSTALL_DIR=/usr/local -export PATH=${RBUS_INSTALL_DIR}/bin:${PATH} -export LD_LIBRARY_PATH=${RBUS_INSTALL_DIR}/lib:${LD_LIBRARY_PATH} - -# Build and install RFC parameter provider - -rt_pid=`pidof rtrouted` -if [ ! 
-z "$rt_routed" ]; then - kill -9 `pidof rtrouted` -fi - -rm -fr /tmp/rtroute* -rtrouted -l DEBUG - -/usr/local/bin/rfc_provider - -/bin/bash \ No newline at end of file diff --git a/containers/native-platform/log4crc b/containers/native-platform/log4crc deleted file mode 100644 index 9568354b..00000000 --- a/containers/native-platform/log4crc +++ /dev/null @@ -1,21 +0,0 @@ - - - - - - - 0 - - 0 - 1 - - - - - - - - - - - \ No newline at end of file diff --git a/containers/native-platform/mock-rfc-providers/Makefile.am b/containers/native-platform/mock-rfc-providers/Makefile.am deleted file mode 100644 index ea2317dc..00000000 --- a/containers/native-platform/mock-rfc-providers/Makefile.am +++ /dev/null @@ -1,30 +0,0 @@ - -########################################################################## -# If not stated otherwise in this file or this component's LICENSE -# file the following copyright and licenses apply: -# -# Copyright 2024 RDK Management -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-########################################################################## - -bin_PROGRAMS = rfc_provider -RBUS_INSTALL_DIR = /usr/local -LOCAL_DIR = /usr/local - -rfc_provider_SOURCES = rfc_provider.c -rfc_provider_CFLAGS = -I$(RBUS_INSTALL_DIR)/include/rtmessage -I$(RBUS_INSTALL_DIR)/include/msgpack -I$(RBUS_INSTALL_DIR)/include/rbus -I$(RBUS_INSTALL_DIR)/include -rfc_provider_LDFLAGS = -L${RBUS_INSTALL_DIR}/lib -L$(LOCAL_DIR)/lib -lpthread -lm -lrt -lrtMessage -lrbus -lmsgpackc - - -rfc_provider_LDADD = -L${RBUS_INSTALL_DIR}/lib -lrtMessage -lrbus -lmsgpackc -lpthread -lm -lrt \ No newline at end of file diff --git a/containers/native-platform/mock-rfc-providers/configure.ac b/containers/native-platform/mock-rfc-providers/configure.ac deleted file mode 100644 index 5252e421..00000000 --- a/containers/native-platform/mock-rfc-providers/configure.ac +++ /dev/null @@ -1,5 +0,0 @@ -AC_INIT([rfc_provider_for_t2], [1.0], [your_email@example.com]) -AM_INIT_AUTOMAKE([-Wall -Werror foreign]) -AC_PROG_CC -AC_CONFIG_FILES([Makefile]) -AC_OUTPUT \ No newline at end of file diff --git a/containers/native-platform/mock-rfc-providers/rfc_provider.c b/containers/native-platform/mock-rfc-providers/rfc_provider.c deleted file mode 100644 index 1b90def4..00000000 --- a/containers/native-platform/mock-rfc-providers/rfc_provider.c +++ /dev/null @@ -1,299 +0,0 @@ -/* - * If not stated otherwise in this file or this component's LICENSE file - * the following copyright and licenses apply: - * - * Copyright 2016 RDK Management - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. -*/ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include - - -#define NUMBER_OF_DATA_ELEMENTS 10 - -#define DATA_HANDLER_MACRO \ - { \ - multiRbusProvider_SampleDataGetHandler, \ - multiRbusProvider_SampleDataSetHandler, \ - NULL, \ - NULL, \ - NULL, \ - NULL \ - } - - -rbusHandle_t handle1; -rbusError_t multiRbusProvider_SampleDataGetHandler(rbusHandle_t handle, rbusProperty_t prop, rbusGetHandlerOptions_t* opts); -rbusError_t multiRbusProvider_SampleDataSetHandler(rbusHandle_t handle, rbusProperty_t property, rbusSetHandlerOptions_t* opts); - -// Add a string array to store the data element values -char dataElementValues[NUMBER_OF_DATA_ELEMENTS][256]; - -char* dataElemenInitValues[NUMBER_OF_DATA_ELEMENTS] = -{ - "https://mockxconf:50050/loguploader/getT2DCMSettings", - "true", - "AA:BB:CC:DD:EE:FF", - "10.0.0.1", - "Platform_Container_Test_DEVICE", - "Platform_Cotainer_1.0.0", - "false", - "global", - "false", - "DOCKER" -}; - -void init_dataElementValues() -{ - for (int i = 0; i < NUMBER_OF_DATA_ELEMENTS; i++) - { - memset(dataElementValues[i], 0, 256); - strcpy(dataElementValues[i], dataElemenInitValues[i]); - } -} - -// Add a string array to store the data element names -char* const dataElementNames[NUMBER_OF_DATA_ELEMENTS] = -{ - "Device.DeviceInfo.X_RDKCENTRAL-COM_RFC.Feature.Telemetry.ConfigURL", - "Device.DeviceInfo.X_RDKCENTRAL-COM_RFC.Feature.Telemetry.Enable", - "Device.DeviceInfo.X_COMCAST-COM_STB_MAC", - "Device.DeviceInfo.X_COMCAST-COM_STB_IP", - "Device.DeviceInfo.X_RDKCENTRAL-COM_RFC.Feature.AccountInfo.AccountID", - "Device.DeviceInfo.X_COMCAST-COM_FirmwareFilename", - "Device.DeviceInfo.X_RDKCENTRAL-COM_RFC.Feature.Telemetry.MTLS.Enable", - "Device.DeviceInfo.X_RDKCENTRAL-COM_Syndication.PartnerId", - "Device.X_RDK_WebConfig.webcfgSubdocForceReset", - 
"Device.DeviceInfo.ModelName" -}; - - -/** - * @brief Structure representing a data element in the rbusDataElement_t array. - */ -rbusDataElement_t dataElements[NUMBER_OF_DATA_ELEMENTS] = -{ - { - dataElementNames[0], // The name of the data element - RBUS_ELEMENT_TYPE_PROPERTY, // The type of the data element - DATA_HANDLER_MACRO - }, - { - dataElementNames[1], // The name of the data element - RBUS_ELEMENT_TYPE_PROPERTY, // The type of the data element - DATA_HANDLER_MACRO - - }, - { - dataElementNames[2], // The name of the data element - RBUS_ELEMENT_TYPE_PROPERTY, // The type of the data element - DATA_HANDLER_MACRO - }, - { - dataElementNames[3], // The name of the data element - RBUS_ELEMENT_TYPE_PROPERTY, // The type of the data element - DATA_HANDLER_MACRO - }, - { - dataElementNames[4], // The name of the data element - RBUS_ELEMENT_TYPE_PROPERTY, // The type of the data element - DATA_HANDLER_MACRO - }, - { - dataElementNames[5], // The name of the data element - RBUS_ELEMENT_TYPE_PROPERTY, // The type of the data element - DATA_HANDLER_MACRO - }, - { - dataElementNames[6], // The name of the data element - RBUS_ELEMENT_TYPE_PROPERTY, // The type of the data element - DATA_HANDLER_MACRO - }, - { - dataElementNames[7], // The name of the data element - RBUS_ELEMENT_TYPE_PROPERTY, // The type of the data element - DATA_HANDLER_MACRO - }, - { - dataElementNames[8], // The name of the data element - RBUS_ELEMENT_TYPE_PROPERTY, // The type of the data element - DATA_HANDLER_MACRO - }, - { - dataElementNames[9], // The name of the data element - RBUS_ELEMENT_TYPE_PROPERTY, // The type of the data element - DATA_HANDLER_MACRO - } -}; - - - - -/** - * @brief Signal handler function for handling the exit signal. - * - * This function is called when the program receives an exit signal. It performs the following tasks: - * - Unregisters data elements from two handles (handle1) using the rbus_unregDataElements function. 
- * - Closes handle1 using the rbus_close function. - * - Prints a message indicating that the provider is exiting. - * - Calls the exit function to terminate the program. - * - * @param sig The signal number. - */ -void exitHandler(int sig) -{ - printf("Caught signal %d\n", sig); - - int rc1 = rbus_unregDataElements(handle1, 1, dataElements); - if (rc1 != RBUS_ERROR_INVALID_HANDLE) - { - printf("provider: rbus_unregDataElements for handle1 err: %d\n", rc1); - } - - rc1 = rbus_close(handle1); - if (rc1 != RBUS_ERROR_INVALID_HANDLE) - { - printf("consumer: rbus_close handle1 err: %d\n", rc1); - } - printf("provider: exit\n"); - exit(0); -} - -int main(int argc, char* argv[]) -{ - (void)(argc); - (void)(argv); - - - int rc1 = RBUS_ERROR_SUCCESS; - - char componentName[] = "rfc_provider_for_t2"; - init_dataElementValues(); - - printf("provider: start\n"); - - rc1 = rbus_open(&handle1, componentName); - if (rc1 != RBUS_ERROR_SUCCESS) - { - printf("provider: First rbus_open for handle1 err: %d\n", rc1); - goto exit1; - } - - rc1 = rbus_regDataElements(handle1, NUMBER_OF_DATA_ELEMENTS, dataElements); - - // Add exit handler to catch signals and close rbus handles - signal(SIGINT, exitHandler); - signal(SIGTERM, exitHandler); - - while (1) - { - // Your code here - printf("provider: running ...\n"); - sleep(1); - } - -exit2: - rc1 = rbus_close(handle1); - if (rc1 != RBUS_ERROR_INVALID_HANDLE) - { - printf("consumer: rbus_close handle1 err: %d\n", rc1); - } - -exit1: - printf("provider: exit\n"); - exit(0); -} - -rbusError_t multiRbusProvider_SampleDataSetHandler(rbusHandle_t handle, rbusProperty_t prop, rbusSetHandlerOptions_t* opts) -{ - (void)handle; - (void)opts; - - char const* name = rbusProperty_GetName(prop); - rbusValue_t value = rbusProperty_GetValue(prop); - rbusValueType_t type = rbusValue_GetType(value); - - printf("Called set handler for [%s]\n", name); - -// For loop to iterate through the data element names and check if the name matches the name of the data 
element - for (int i = 0; i < NUMBER_OF_DATA_ELEMENTS; i++) - { - printf("dataElementNames[%d] = %s\n", i, dataElementNames[i]); - if (strcmp(name, dataElementNames[i]) == 0) - { - if (type == RBUS_STRING) - { - int len = 0; - char const* data = NULL; - data = rbusValue_GetString(value, &len); - printf("Called set handler for [%s] & value is %s\n", name, data); - // Clear the value in dataElementValues array - memset(dataElementValues[i], 0, strlen(dataElementValues[i])); - // Copy the new value to the dataElementValues array - strcpy(dataElementValues[i], data); - printf("Done setting value"); - } - else - { - printf("Cant Handle [%s]\n", name); - return RBUS_ERROR_INVALID_INPUT; - } - break; - } - } - - return RBUS_ERROR_SUCCESS; -} - -rbusError_t multiRbusProvider_SampleDataGetHandler(rbusHandle_t handle, rbusProperty_t property, rbusGetHandlerOptions_t* opts) -{ - (void)handle; - (void)opts; - rbusValue_t value; - int intData = 0; - char const* name; - - rbusValue_Init(&value); - name = rbusProperty_GetName(property); - - // For loop to iterate through the data element names and check if the name matches the name of the data element - for (int i = 0; i < NUMBER_OF_DATA_ELEMENTS; i++) - { - if (strcmp(name, dataElementNames[i]) == 0) - { - rbusValue_SetString(value, dataElementValues[i]); - break; - } - } - - rbusProperty_SetValue(property, value); - rbusValue_Release(value); - - return RBUS_ERROR_SUCCESS; -} - - diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 00000000..f1288acb --- /dev/null +++ b/docs/README.md @@ -0,0 +1,129 @@ +# Telemetry 2.0 Documentation + +Welcome to the Telemetry 2.0 framework documentation. 
+ +## Quick Links + +- [Architecture Overview](architecture/overview.md) - System design and components +- [API Reference](api/public-api.md) - Public API documentation +- [Developer Guide](integration/developer-guide.md) - Getting started for developers +- [Build Setup](integration/build-setup.md) - Build environment configuration +- [Testing Guide](integration/testing.md) - Test procedures and coverage + +## What is Telemetry 2.0? + +Telemetry 2.0 is a lightweight, efficient telemetry framework designed for embedded devices in the RDK (Reference Design Kit) ecosystem. It provides real-time monitoring, event collection, and reporting capabilities for resource-constrained devices such as set-top boxes, gateways, and IoT devices. + +### Key Features + +- **Lightweight**: Optimized for devices with <128MB RAM +- **Flexible**: Profile-based configuration via JSON or XConf +- **Efficient**: Connection pooling and batch reporting +- **Secure**: mTLS support for encrypted communication +- **Platform-Independent**: Supports multiple architectures and platforms + +## Documentation Structure + +``` +docs/ +├── architecture/ # System architecture and design +│ ├── overview.md # High-level system architecture +│ ├── components.md # Component relationships +│ ├── threading-model.md # Threading and synchronization +│ └── data-flow.md # Data flow and state machines +├── api/ # API documentation +│ ├── public-api.md # Public API reference +│ ├── internal-api.md # Internal API reference +│ └── error-codes.md # Error codes and meanings +├── integration/ # Integration and setup guides +│ ├── developer-guide.md # Getting started guide +│ ├── build-setup.md # Build configuration +│ ├── platform-porting.md # Porting to new platforms +│ └── testing.md # Testing procedures +└── troubleshooting/ # Debug and troubleshooting + ├── memory-issues.md # Memory debugging + ├── threading-issues.md # Thread debugging + └── common-errors.md # Common error solutions +``` + +## Component 
Documentation + +Detailed technical documentation for individual components is located in `source/docs/` following the source code structure: + +- [Bulk Data](../source/docs/bulkdata/README.md) - Profile and marker management +- [Protocol](../source/docs/protocol/README.md) - HTTP communication layer +- [Scheduler](../source/docs/scheduler/README.md) - Report scheduling +- [XConf Client](../source/docs/xconf-client/README.md) - Configuration retrieval +- [DCA Utilities](../source/docs/dcautil/README.md) - Log marker collection + +## Getting Started + +### For Developers + +1. **Read the [Architecture Overview](architecture/overview.md)** to understand the system design +2. **Follow the [Developer Guide](integration/developer-guide.md)** to set up your environment +3. **Review the [API Reference](api/public-api.md)** for available functions +4. **Check [Testing Guide](integration/testing.md)** for test procedures + +### For Platform Vendors + +1. **Review [Platform Porting Guide](integration/platform-porting.md)** for integration requirements +2. **Check [Build Setup](integration/build-setup.md)** for compilation options +3. **Review [Threading Model](architecture/threading-model.md)** for resource planning + +### For Maintainers + +1. **Review [Component Documentation](../source/docs/)** for implementation details +2. **Check [Troubleshooting Guides](troubleshooting/)** for common issues +3. 
**Refer to [Internal API](api/internal-api.md)** for module interfaces + +## Project Resources + +- **Repository**: https://github.com/rdkcentral/telemetry +- **Bug Reports**: GitHub Issues +- **Contributions**: See [CONTRIBUTING.md](../CONTRIBUTING.md) +- **License**: Apache 2.0 (see [LICENSE](../LICENSE)) + +## Documentation Conventions + +### Code Formatting + +- **Functions**: `function_name()` +- **Types**: `type_name_t` +- **Constants**: `CONSTANT_NAME` +- **Files**: [filename.c](../source/telemetry2_0.c) + +### Platform Tags + +- **[Linux]** - Linux-specific information +- **[RDKB]** - RDK-B specific information +- **[Embedded]** - General embedded platform info + +### Importance Indicators + +- ⚠️ **Warning** - Important information that could cause issues if ignored +- 💡 **Tip** - Helpful suggestion or best practice +- 📝 **Note** - Additional context or clarification +- ❌ **Deprecated** - Feature or API scheduled for removal + +## Contributing to Documentation + +Documentation improvements are welcome! When contributing: + +1. Follow the [Documentation Style Guide](.github/skills/technical-documentation-writer/SKILL.md) +2. Test all code examples +3. Use Mermaid for diagrams +4. Add cross-references to related docs +5. Update the table of contents + +## Need Help? + +- **For usage questions**: Check [Troubleshooting](troubleshooting/) +- **For build issues**: See [Build Setup](integration/build-setup.md) +- **For bugs**: File a GitHub issue +- **For API questions**: See [API Reference](api/public-api.md) + +--- + +**Last Updated**: March 2026 +**Version**: 2.0 diff --git a/docs/api/public-api.md b/docs/api/public-api.md new file mode 100644 index 00000000..60532a05 --- /dev/null +++ b/docs/api/public-api.md @@ -0,0 +1,486 @@ +# Telemetry 2.0 Public API Reference + +## Overview + +This document describes the public API for the Telemetry 2.0 framework. These functions provide the interface for applications to interact with the telemetry system. 
+ +## Header Files + +```c +#include "telemetry2_0.h" +#include "telemetry_busmessage_sender.h" +``` + +## Initialization and Cleanup + +### t2_init() + +Initialize the telemetry system. + +**Signature:** +```c +void t2_init(char *component); +``` + +**Parameters:** +- `component` - Mandatory name of the component, used for marker-to-component mapping in profiles. Must be a non-NULL string. + +**Returns:** Nothing (`void`). + +**Thread Safety:** Not thread-safe. Must be called once during application startup before any other telemetry functions. + +**Example:** +```c +#include "telemetry2_0.h" + +int main(void) { + // Initialize with component name (must not be NULL) + t2_init("MyComponent"); + + // ... application code ... + + t2_uninit(); + return 0; +} +``` + + +### t2_uninit() + +Uninitialize and cleanup telemetry system. + +**Signature:** +```c +void t2_uninit(void); +``` + +**Thread Safety:** Not thread-safe. Must be called only after all telemetry operations complete, typically during application shutdown. + +**Notes:** +- Stops all background threads +- Sends any pending reports (best effort) +- Frees all allocated resources +- Safe to call even if t2_init() failed + +**Example:** +```c +// Cleanup on exit +void cleanup(void) { + t2_uninit(); + // Other cleanup... +} + +int main(void) { + atexit(cleanup); + + // component name must be a valid non-NULL string + t2_init("MyComponent"); + + // Application runs... + + return 0; +} +``` + +## Event Reporting + +### t2_event_s() + +Send a string event marker. + +**Signature:** +```c +T2ERROR t2_event_s(const char* marker, const char* value); +``` + +**Parameters:** +- `marker` - Event marker name (non-NULL) +- `value` - Event value string (non-NULL); empty string `""` and `"0"` are silently ignored + +**Returns:** +- `T2ERROR_SUCCESS` - Event sent or queued +- `T2ERROR_FAILURE` - Send failed or event filtered + +**Thread Safety:** Thread-safe. Can be called from any thread. 
+ +**Notes:** +- Event matched against active profile markers +- Empty value (`""`) or value `"0"` is ignored without sending +- Returns `T2ERROR_FAILURE` if `t2_init()` has not been called + +**Examples:** +```c +// Simple event +t2_event_s("Device_Boot_Complete", "success"); + +// Component status +t2_event_s("WIFI_Status", "connected"); + +// Error event +t2_event_s("Storage_Error", "disk_full"); + +// Formatted message +char msg[256]; +snprintf(msg, sizeof(msg), "Connection failed: %s", strerror(errno)); +t2_event_s("Network_Error", msg); +``` + +### t2_event_d() + +Send an integer or metric event marker. + +**Signature:** +```c +T2ERROR t2_event_d(const char* marker, int value); +``` + +**Parameters:** +- `marker` - Event marker name (non-NULL) +- `value` - Integer value; value `0` is silently ignored + +**Returns:** +- `T2ERROR_SUCCESS` - Event sent or queued +- `T2ERROR_FAILURE` - Send failed or event filtered + +**Thread Safety:** Thread-safe. + +**Notes:** +- Value `0` is ignored without sending (by design) + +**Examples:** +```c +// Signal strength +t2_event_d("WIFI_RSSI", -65); + +// Log upload failure counter +t2_event_d("SYS_ERROR_LOGUPLOAD_FAILED", 1); + +// NTP sync error counter +t2_event_d("SYS_INFO_NTP_SYNC_ERROR", 1); + +// Uptime in seconds +t2_event_d("System_Uptime", 3600); + +// Bandwidth in Mbps +t2_event_d("Download_Speed", 125); +``` + +### t2_event_f() + +Send a floating-point event marker. + +**Signature:** +```c +T2ERROR t2_event_f(const char* marker, double value); +``` + +**Parameters:** +- `marker` - Event marker name (non-NULL) +- `value` - Floating-point (double) value + +**Returns:** +- `T2ERROR_SUCCESS` - Event sent or queued +- `T2ERROR_FAILURE` - Send failed or event filtered + +**Thread Safety:** Thread-safe. 
+ +**Examples:** +```c +// Latency measurement +t2_event_f("Connection_Latency_ms", 123.45); + +// Memory usage as percentage +t2_event_f("Memory_Usage_Pct", 67.8); + +// Query performance in milliseconds +t2_event_f("Query_Performance_ms", 45.2); +``` + + +## Usage Patterns + +### Basic Telemetry Integration + +```c +#include "telemetry2_0.h" + +int main(void) { + // Initialize (returns void; component name must not be NULL) + t2_init("MyComponent"); + + // Send boot event + t2_event_s("Application_Start", "v1.0.0"); + + // Application runs... + + // Send shutdown event + t2_event_s("Application_Stop", "clean_shutdown"); + + // Cleanup + t2_uninit(); + return 0; +} +``` + +### Error Reporting + +```c +void handle_error(const char* component, int error_code) { + char marker[128]; + char details[256]; + + // Construct marker name + snprintf(marker, sizeof(marker), "%s_Error", component); + + // Construct error details + snprintf(details, sizeof(details), + "code=%d,message=%s", + error_code, strerror(error_code)); + + // Send event + t2_event_s(marker, details); + + // Also log locally + syslog(LOG_ERR, "%s: %s", marker, details); +} +``` + +### Performance Monitoring + +```c +void monitor_operation(const char* operation_name) { + struct timespec start, end; + double duration_ms; + char marker[128]; + + clock_gettime(CLOCK_MONOTONIC, &start); + + // Perform operation... 
+ do_operation(); + + clock_gettime(CLOCK_MONOTONIC, &end); + + // Calculate duration + duration_ms = (end.tv_sec - start.tv_sec) * 1000.0 + + (end.tv_nsec - start.tv_nsec) / 1000000.0; + + // Report if slow: use t2_event_f for floating-point values + if (duration_ms > 100.0) { + snprintf(marker, sizeof(marker), "%s_Duration_ms", operation_name); + t2_event_f(marker, duration_ms); + } +} +``` + +### State Transitions + +```c +typedef enum { + STATE_IDLE, + STATE_CONNECTING, + STATE_CONNECTED, + STATE_DISCONNECTED +} connection_state_t; + +void report_state_change(connection_state_t old_state, + connection_state_t new_state) { + const char* state_names[] = { + "IDLE", "CONNECTING", "CONNECTED", "DISCONNECTED" + }; + + t2_event_s("Connection_State_Change", state_names[new_state]); +} +``` + +## Best Practices + +### 1. Marker Naming Conventions + +Use clear, hierarchical naming: + +```c +// GOOD +t2_event_s("WIFI_Connection_Success", "5GHz"); +t2_event_s("Storage_Disk_Full", "/var"); +t2_event_s("Process_Crash", "watchdog"); + +// AVOID +t2_event_s("event1", "data"); +t2_event_s("err", "err"); +``` + +**Recommended format**: `Component_SubComponent_Event` + +### 2. Value Format + +Provide structured, parseable values: + +```c +// GOOD - Structured +t2_event_s("Connection_Info", "ssid=MyNetwork,channel=6,rssi=-65"); + +// GOOD - Simple +t2_event_s("Boot_Status", "success"); + +// AVOID - Unstructured prose +t2_event_s("Event", "The connection was established after 3 retries"); +``` + +### 3. Error Handling + +Telemetry should never cause application failure: + +```c +// GOOD - Defensive guard before sending +if (marker_name && value) { + t2_event_s(marker_name, value); +} + +// GOOD - Check return value of event functions +if (t2_event_s("Boot_Status", "up") != T2ERROR_SUCCESS) { + syslog(LOG_WARNING, "Telemetry event send failed"); + // Application continues +} + +// NOTE: t2_init() returns void; it cannot fail with a return code. 
+// Always pass a valid non-NULL component name. +t2_init("MyComponent"); +``` + +### 4. Resource Efficiency + +Don't spam events: + +```c +// GOOD - Throttled +static time_t last_report = 0; +time_t now = time(NULL); + +if (now - last_report >= 60) { // Max once per minute + t2_event_d("CPU_Usage", get_cpu_usage()); + last_report = now; +} + +// AVOID - Too frequent +while (1) { + t2_event_d("CPU_Usage", get_cpu_usage()); // Don't! + usleep(1000); +} +``` + +### 5. Lifecycle Management + +```c +// GOOD - Proper lifecycle +void init_application(void) { + t2_init("MyComponent"); // Must pass valid non-NULL name + // Other init... +} + +void cleanup_application(void) { + // Other cleanup... + t2_uninit(); // Call last +} + +// In main +atexit(cleanup_application); +init_application(); +``` + +## Performance Considerations + +### Event Queue + +- **Queue Size**: 200 events +- **Behavior**: Newest events dropped when full +- **Recommendation**: Don't send >10 events/sec sustained + +### Memory + +- **Per Event**: ~100 bytes +- **Total Overhead**: ~1MB typical +- **Recommendation**: Limit marker count in profiles + +### CPU + +- **Event Processing**: <1ms per event +- **Report Generation**: ~10ms per report +- **Recommendation**: Events should be fire-and-forget + +## Common Patterns + +### Application Monitoring + +```c +// Track application lifecycle +void app_lifecycle_monitoring(void) { + t2_event_s("App_Start", VERSION); + + // ... application runs ... 
+ + t2_event_s("App_Stop", "normal"); +} +``` + + +### Health Monitoring + +```c +void health_check(void) { + // Memory check (integer KB value) + long mem_free = get_free_memory_kb(); + if (mem_free < 1024) { // Low memory + t2_event_d("Memory_Low_KB", (int)mem_free); + } + + // Disk check (integer MB value) + long disk_free = get_free_disk_mb(); + if (disk_free < 100) { // Low disk + t2_event_d("Disk_Low_MB", (int)disk_free); + } + + // CPU check (floating point - use t2_event_f for double) + double cpu = get_cpu_usage(); + if (cpu > 90.0) { // High CPU + t2_event_f("CPU_High_Pct", cpu); + } +} +``` + +## Error Codes + +Event functions return `T2ERROR` (defined in `telemetry2_0.h`). `t2_init()` and `t2_uninit()` return `void`. + +| Code | Value | Meaning | +|------|-------|---------| +| `T2ERROR_SUCCESS` | `0` | Operation succeeded | +| `T2ERROR_FAILURE` | `1` | Generic failure | +| `T2ERROR_INVALID_ARGS` | — | NULL or invalid parameter passed | +| `T2ERROR_MEMALLOC_FAILED` | — | Memory allocation failed | +| `T2ERROR_COMPONENT_NULL` | — | `t2_init()` not called before event send | +| `T2ERROR_INVALID_PROFILE` | — | Profile data is malformed | +| `T2ERROR_PROFILE_NOT_FOUND` | — | Referenced profile does not exist | + +## Thread Safety Summary + +| Function | Return Type | Thread Safety | +|----------|-------------|---------------| +| `t2_init()` | `void` | ❌ Not thread-safe | +| `t2_uninit()` | `void` | ❌ Not thread-safe | +| `t2_event_s()` | `T2ERROR` | ✅ Thread-safe | +| `t2_event_d()` | `T2ERROR` | ✅ Thread-safe | +| `t2_event_f()` | `T2ERROR` | ✅ Thread-safe | + +## See Also + +- [Architecture Overview](../architecture/overview.md) - System design +- [Component Documentation](../../source/docs/) - Component details +- [Integration Guide](../integration/developer-guide.md) - Developer guide +- [Troubleshooting](../troubleshooting/common-errors.md) - Common issues + +--- + +**API Version**: 2.0 +**Last Updated**: March 2026 diff --git 
a/docs/architecture/overview.md b/docs/architecture/overview.md new file mode 100644 index 00000000..53967838 --- /dev/null +++ b/docs/architecture/overview.md @@ -0,0 +1,524 @@ +# Telemetry 2.0 Architecture Overview + +## System Overview + +Telemetry 2.0 is a lightweight, profile-based telemetry framework designed for embedded Linux devices with constrained resources. It provides real-time event collection, data model monitoring, and periodic reporting capabilities optimized for devices with limited memory (<128MB RAM) and CPU resources. + +### Design Goals + +1. **Minimal Resource Footprint** - Operate efficiently on memory-constrained devices +2. **Platform Independence** - Support multiple embedded platforms and architectures +3. **Flexible Configuration** - Dynamic profile-based configuration via JSON/XConf +4. **Reliable Reporting** - Guaranteed delivery with retry logic and caching +5. **Thread Safety** - Safe concurrent operation across multiple threads +6. **Extensibility** - Support for multiple protocols and encoding formats + +## High-Level Architecture + +```mermaid +graph TB + subgraph "External Systems" + XCONF[XConf Server] + COLLECTOR[Data Collector] + APPS[Applications] + end + + subgraph "Telemetry 2.0 Core" + subgraph "Configuration" + XC[XConf Client] + CFG[Config Manager] + end + + subgraph "Event Collection" + API[Public API] + ER[Event Receiver] + DM[Data Model Client] + end + + subgraph "Processing" + PM[Profile Manager] + MM[Marker Matcher] + RG[Report Generator] + end + + subgraph "Communication" + POOL[Connection Pool] + HTTP[HTTP Client] + RBUS[RBUS Client] + end + + subgraph "Support" + SCHED[Scheduler] + CACHE[Report Cache] + LOG[Logger] + end + end + + XCONF -->|Profiles| XC + XC --> CFG + CFG --> PM + + APPS -->|Events| API + API --> ER + DM -->|Parameters| MM + + ER --> MM + PM -->|Active Profiles| MM + MM --> RG + + SCHED -->|Trigger| RG + RG --> CACHE + CACHE --> POOL + + POOL --> HTTP + POOL --> RBUS + HTTP -->|Reports| 
COLLECTOR
+```
+
+## Key Components
+
+### 1. Configuration Layer
+
+#### XConf Client
+- **Purpose**: Retrieve profile configurations from XConf server
+- **Thread**: Dedicated background thread
+- **Protocol**: HTTP/HTTPS with mTLS
+- **Retry Logic**: Exponential backoff
+- **Files**: `source/xconf-client/`
+
+#### Config Manager
+- **Purpose**: Parse and validate profile configurations
+- **Storage**: In-memory profile list
+- **Persistence**: Optional disk caching
+- **Files**: `source/bulkdata/profilexconf.c`
+
+### 2. Event Collection Layer
+
+#### Public API
+- **Purpose**: Application interface for sending telemetry events
+- **Functions**:
+  - `t2_event_s()` - String events
+  - `t2_event_d()` - Integer events
+  - `t2_event_f()` - Floating-point (double) events
+- **Thread Safety**: Fully thread-safe
+- **Files**: `source/telemetry2_0.c`, `include/telemetry2_0.h`
+
+#### Event Receiver
+- **Purpose**: Queue and process incoming events
+- **Queue**: Mutex/condition-variable-protected queue (200 events max, see `T2EVENTQUEUE_MAX_LIMIT`)
+- **Thread**: Dedicated event processing thread
+- **Files**: `source/bulkdata/t2eventreceiver.c`
+
+#### Data Model Client
+- **Purpose**: Retrieve TR-181 data model parameters
+- **Protocol**: D-Bus (CCSP) or RBUS
+- **Caching**: Parameter value cache with TTL
+- **Files**: `source/ccspinterface/`
+
+### 3. Processing Layer
+
+#### Profile Manager
+- **Purpose**: Manage profile lifecycle
+- **Operations**: Create, activate, deactivate, destroy
+- **Storage**: Linked list with mutex protection
+- **Files**: `source/bulkdata/profile.c`
+
+#### Marker Matcher
+- **Purpose**: Match events to profile markers
+- **Algorithm**: Hash table lookup (O(1) average)
+- **Concurrency**: Read-write lock for parallel matching
+- **Files**: `source/bulkdata/t2markers.c`
+
+#### Report Generator
+- **Purpose**: Assemble and format reports
+- **Formats**: JSON, Protocol Buffers
+- **Encoding**: UTF-8
+- **Files**: `source/reportgen/reportgen.c`
+
+### 4. 
Communication Layer + +#### Connection Pool +- **Purpose**: Manage reusable HTTP connections +- **Pool Size**: 1-5 connections (configurable) +- **Features**: Keep-alive, mTLS, retry logic +- **Thread Safety**: Mutex-protected handle acquisition +- **Files**: `source/protocol/http/multicurlinterface.c` + +#### HTTP Client +- **Purpose**: Transmit reports via HTTP/HTTPS +- **Library**: libcurl 7.65.0+ +- **Features**: Chunked encoding, compression, mTLS +- **Files**: `source/protocol/http/` + +#### RBUS Client +- **Purpose**: Alternative transport via RBUS +- **Use Case**: Local inter-process communication +- **Files**: `source/protocol/rbusMethod/` + +### 5. Support Components + +#### Scheduler +- **Purpose**: Trigger periodic report generation +- **Precision**: ±1 second typical +- **Method**: Condition variable timed wait +- **Files**: `source/scheduler/` + +#### Report Cache +- **Purpose**: Persist reports across reboots/network failures +- **Storage**: Filesystem-based queue +- **Cleanup**: FIFO with age limits +- **Location**: `/nvram/telemetry/` or `/tmp/` + +#### Logger +- **Purpose**: Diagnostic logging +- **Integration**: RDK logger (rdk_debug.h) +- **Levels**: ERROR, WARN, INFO, DEBUG +- **Files**: Integrated throughout + +## Data Flow + +### Event Processing Flow + +```mermaid +sequenceDiagram + participant App as Application + participant API as Public API + participant Queue as Event Queue + participant ER as Event Receiver + participant MM as Marker Matcher + participant Prof as Profile + participant RG as Report Generator + + App->>API: t2_event_s("WIFI_Connected", "5GHz") + API->>Queue: enqueue(event) + API-->>App: return + + Note over ER: Event Thread + ER->>Queue: dequeue() + Queue-->>ER: event + + ER->>MM: match_event(event) + activate MM + + MM->>MM: Hash lookup markers + MM->>Prof: Find matching profiles + + loop For each match + MM->>Prof: Lock profile + MM->>RG: Add to report + RG->>RG: Append JSON + MM->>Prof: Unlock profile + end + + 
deactivate MM + ER->>ER: event_count++ +``` + +### Report Generation Flow + +```mermaid +sequenceDiagram + participant Sched as Scheduler + participant Prof as Profile Manager + participant DM as Data Model + participant RG as Report Generator + participant Pool as Connection Pool + participant HTTP as HTTP Client + participant Server as Collection Server + + Note over Sched: Timer expires + Sched->>Prof: Get due profiles + Prof-->>Sched: Profile list + + loop For each profile + Sched->>RG: generate_report(profile) + activate RG + + RG->>Prof: Lock profile + RG->>Prof: Get accumulated events + + alt Has data model params + RG->>DM: Get parameter values + DM-->>RG: Values + end + + RG->>RG: Build JSON + RG->>Prof: Clear events + RG->>Prof: Unlock profile + + RG->>Pool: Acquire connection + Pool-->>RG: HTTP handle + + RG->>HTTP: POST report + HTTP->>Server: HTTPS request + Server-->>HTTP: 200 OK + HTTP-->>RG: Success + + RG->>Pool: Release connection + deactivate RG + end +``` + +### Configuration Update Flow + +```mermaid +sequenceDiagram + participant Server as XConf Server + participant XC as XConf Client + participant CFG as Config Parser + participant PM as Profile Manager + participant OLD as Old Profile + participant NEW as New Profile + + Note over XC: Periodic fetch or
configuration change + + XC->>Server: GET /xconf/dcm/settings + Server-->>XC: JSON configuration + + XC->>CFG: parse_profiles(json) + activate CFG + + CFG->>CFG: Validate JSON + CFG->>CFG: Extract profiles + + loop For each profile + CFG->>PM: Check if exists + + alt Profile exists + alt Version changed + PM->>OLD: Deactivate + PM->>OLD: Destroy + CFG->>NEW: Create new + PM->>NEW: Activate + else Same version + Note over PM: Skip, no change + end + else New profile + CFG->>NEW: Create + PM->>NEW: Activate + end + end + + deactivate CFG + + PM->>PM: Remove deleted profiles +``` + +## Threading Model + +### Thread Overview + +| Thread Name | Purpose | Stack Size | Priority | Wakeable | +|------------|---------|------------|----------|----------| +| Main | Initialization, cleanup | Default | Normal | - | +| Event Receiver | Process event queue | 32KB | High | Signal | +| XConf Fetcher | Fetch configurations | 64KB | Low | Timer | +| Scheduler | Trigger reports | 32KB | Normal | Timer | +| Report Sender | HTTP transmission | 64KB | Low | Queue | + +### Synchronization Primitives + +```c +// Profile list protection +static pthread_mutex_t g_profile_list_mutex = PTHREAD_MUTEX_INITIALIZER; + +// Connection pool protection +static pthread_mutex_t g_pool_mutex = PTHREAD_MUTEX_INITIALIZER; +static pthread_cond_t g_pool_cond = PTHREAD_COND_INITIALIZER; + +// Event queue protection +static pthread_mutex_t g_event_queue_mutex = PTHREAD_MUTEX_INITIALIZER; +static pthread_cond_t g_event_queue_cond = PTHREAD_COND_INITIALIZER; + +// Marker cache protection +static pthread_rwlock_t g_marker_cache_lock = PTHREAD_RWLOCK_INITIALIZER; + +// Per-profile protection +typedef struct { + pthread_mutex_t mutex; // Protects profile state + // ... +} profile_t; +``` + +### Lock Ordering Rules + +To prevent deadlocks, always acquire locks in this order: + +1. `g_profile_list_mutex` (global profile list) +2. `profile->mutex` (individual profile) +3. `g_pool_mutex` (connection pool) +4. 
`g_marker_cache_lock` (read or write)
+5. `g_event_queue_mutex` (event queue)
+
+**Never hold multiple locks unless following this order!**
+
+## Memory Architecture
+
+### Memory Layout
+
+```mermaid
+graph TB
+    subgraph "Static Memory (~256KB)"
+        A[Global State]
+        B[Thread Stacks]
+        C[Mutexes/Locks]
+    end
+
+    subgraph "Dynamic Memory (~1MB typical)"
+        D[Profile Structures]
+        E[Marker Definitions]
+        F[Event Queue]
+        G[Report Buffers]
+        H[Connection Pool]
+        I[Configuration Cache]
+    end
+
+    subgraph "Temporary Memory"
+        J[HTTP Buffers]
+        K[JSON Parser]
+        L[Data Model Queries]
+    end
+```
+
+### Memory Budget (Typical Configuration)
+
+| Component | Static | Dynamic | Notes |
+|-----------|--------|---------|-------|
+| Core system | 128 KB | - | Code, globals |
+| Thread stacks | 160 KB | - | 5 threads × 32KB |
+| Profiles (5) | - | 32 KB | ~6.5KB each |
+| Connection pool | - | 2 KB | 3 connections |
+| Event queue | - | 80 KB | 200 events max (`T2EVENTQUEUE_MAX_LIMIT`) |
+| Report buffer | - | 64 KB | Temporary |
+| Configuration | - | 16 KB | Cached JSON |
+| **Total** | **~288 KB** | **~194 KB** | **~512 KB RSS** |
+
+### Memory Management Strategy
+
+1. **Minimize Heap Fragmentation**
+   - Use memory pools for fixed-size allocations
+   - Batch allocations/deallocations
+   - Avoid frequent realloc
+
+2. **Bounded Resource Usage**
+   - Maximum profile count enforced
+   - Event queue with fixed size
+   - Connection pool with limits
+
+3. **Cleanup on Errors**
+   - Single-exit functions with goto cleanup
+   - RAII wrappers in C++ tests
+   - NULL pointer checks before access
+
+4. 
**Memory Leak Prevention** + - Every malloc/strdup paired with free + - Valgrind verification in CI + - Smart pointers in C++ tests + + +### Platform Abstraction + +- **Logging**: RDK logger wrapper with fallback to syslog +- **IPC**: RBUS preferred, D-Bus fallback +- **Storage**: /nvram with /tmp fallback +- **Threading**: POSIX threads +- **Networking**: libcurl (OpenSSL/mbedTLS) + +## Security Model + +### Transport Security + +- **mTLS**: Mutual TLS for client authentication +- **Certificate Management**: Integration with librdkcertselector +- **Certificate Rotation**: Automatic cert refresh on failure +- **Fallback**: Recovery certificate support + +### Data Protection + +- **Sensitive Data**: Filtered from reports (passwords, keys) +- **PII Handling**: Privacy control integration +- **Log Scrubbing**: Sensitive data not logged + +### Attack Surface Minimization + +- **Input Validation**: All external inputs validated +- **Buffer Overflow Protection**: Size-checked string operations +- **Integer Overflow**: Checked arithmetic +- **Resource Limits**: Bounded allocations + +## Performance Characteristics + +### CPU Usage + +| Scenario | CPU % | Notes | +|----------|-------|-------| + + +### Memory Usage + +| Scenario | RSS | Notes | +|----------|-----|-------| + +### Network Usage + +| Report Type | Size | Frequency | +|-------------|------|-----------| + + +## Error Handling Philosophy + +### Error Categories + +1. **Fatal Errors** - System cannot continue + - Initialization failure + - Critical resource exhaustion + - **Action**: Exit with error code + +2. **Recoverable Errors** - Operation failed but system continues + - Network timeout + - Single report failure + - **Action**: Log, retry, degrade gracefully + +3. 
**Warnings** - Unexpected but handled + - Configuration parsing issues + - Missing optional parameters + - **Action**: Log, use defaults + +### Recovery Strategies + +```c +// Retry with exponential backoff +int retry_count = 0; +int backoff_ms = 1000; + +while (retry_count < MAX_RETRIES) { + ret = send_report(report); + if (ret == 0) break; + + T2Warn("Report send failed (attempt %d): %d\n", + retry_count + 1, ret); + + usleep(backoff_ms * 1000); + backoff_ms *= 2; // Exponential backoff + retry_count++; +} + +if (ret != 0) { + // Cache for later retry + cache_report(report); +} +``` + +## See Also + +- [Component Documentation](../../source/docs/) - Detailed component docs +- [Threading Model](threading-model.md) - Thread safety details +- [API Reference](../api/public-api.md) - Public API +- [Build System](../integration/build-setup.md) - Build configuration +- [Testing Guide](../integration/testing.md) - Test procedures + +--- + +**Document Version**: 1.0 +**Last Updated**: March 2026 +**Reviewers**: Architecture Team diff --git a/source/bulkdata/profile.c b/source/bulkdata/profile.c index 5a0055c9..fe7d91fd 100644 --- a/source/bulkdata/profile.c +++ b/source/bulkdata/profile.c @@ -354,8 +354,7 @@ static void* CollectAndReport(void* data) struct timespec endTime; struct timespec elapsedTime; char* customLogPath = NULL; - - + int clockReturn = 0; T2ERROR ret = T2ERROR_FAILURE; if( profile->name == NULL || profile->encodingType == NULL || profile->protocol == NULL ) @@ -379,7 +378,7 @@ static void* CollectAndReport(void* data) T2Info("%s ++in profileName : %s\n", __FUNCTION__, profile->name); - clock_gettime(CLOCK_REALTIME, &startTime); + clockReturn = clock_gettime(CLOCK_MONOTONIC, &startTime); if( !strcmp(profile->encodingType, "JSON") || !strcmp(profile->encodingType, "MessagePack")) { JSONEncoding *jsonEncoding = profile->jsonEncoding; @@ -754,9 +753,16 @@ static void* CollectAndReport(void* data) { T2Error("Unsupported encoding format : %s\n", 
profile->encodingType); } - clock_gettime(CLOCK_REALTIME, &endTime); - getLapsedTime(&elapsedTime, &endTime, &startTime); - T2Info("Elapsed Time for : %s = %lu.%lu (Sec.NanoSec)\n", profile->name, (unsigned long )elapsedTime.tv_sec, elapsedTime.tv_nsec); + clockReturn |= clock_gettime(CLOCK_MONOTONIC, &endTime); + if(clockReturn) + { + T2Warning("Failed to get time from clock_gettime()"); + } + else + { + getLapsedTime(&elapsedTime, &endTime, &startTime); + T2Info("Elapsed Time for : %s = %lu.%lu (Sec.NanoSec)\n", profile->name, (unsigned long )elapsedTime.tv_sec, elapsedTime.tv_nsec); + } if(ret == T2ERROR_SUCCESS && jsonReport) { free(jsonReport); diff --git a/source/bulkdata/profilexconf.c b/source/bulkdata/profilexconf.c index 77acf20e..613eaec9 100644 --- a/source/bulkdata/profilexconf.c +++ b/source/bulkdata/profilexconf.c @@ -246,8 +246,8 @@ static void* CollectAndReportXconf(void* data) T2Info("%s ++in profileName : %s\n", __FUNCTION__, profile->name); } - - clock_gettime(CLOCK_REALTIME, &startTime); + int clockReturn = 0; + clockReturn = clock_gettime(CLOCK_MONOTONIC, &startTime); if(profile->encodingType != NULL && !strcmp(profile->encodingType, "JSON")) { if(T2ERROR_SUCCESS != initJSONReportXconf(&profile->jsonReportObj, &valArray)) @@ -303,13 +303,21 @@ static void* CollectAndReportXconf(void* data) encodeEventMarkersInJSON(valArray, profile->eMarkerList); } profile->grepSeekProfile->execCounter += 1; - T2Info("Execution Count = %d\n", profile->grepSeekProfile->execCounter); + T2Info("Xconf Profile Execution Count = %d\n", profile->grepSeekProfile->execCounter); ret = prepareJSONReport(profile->jsonReportObj, &jsonReport); destroyJSONReport(profile->jsonReportObj); profile->jsonReportObj = NULL; - clock_gettime(CLOCK_REALTIME, &endTime); - T2Info("Processing time for profile %s is %ld seconds\n", profile->name, (long)(endTime.tv_sec - startTime.tv_sec)); + clockReturn |= clock_gettime(CLOCK_MONOTONIC, &endTime); + if(clockReturn) + { + T2Warning("Failed 
to get time from clock_gettime()"); + } + else + { + T2Info("%s Xconf Profile Processing Time in seconds : %ld\n", profile->name, (long)(endTime.tv_sec - startTime.tv_sec)); + } + if(ret != T2ERROR_SUCCESS) { T2Error("Unable to generate report for : %s\n", profile->name); @@ -449,9 +457,17 @@ static void* CollectAndReportXconf(void* data) T2Warning("Failed to save grep config to file for profile: %s\n", profile->name); } #endif - clock_gettime(CLOCK_REALTIME, &endTime); - getLapsedTime(&elapsedTime, &endTime, &startTime); - T2Info("Elapsed Time for : %s = %lu.%lu (Sec.NanoSec)\n", profile->name, (unsigned long)elapsedTime.tv_sec, elapsedTime.tv_nsec); + clockReturn |= clock_gettime(CLOCK_MONOTONIC, &endTime); + if (clockReturn) + { + T2Warning("Failed to get time from clock_gettime()"); + } + else + { + getLapsedTime(&elapsedTime, &endTime, &startTime); + T2Info("Elapsed Time for : %s = %lu.%lu (Sec.NanoSec)\n", profile->name, (unsigned long)elapsedTime.tv_sec, elapsedTime.tv_nsec); + } + if(jsonReport) { free(jsonReport); diff --git a/source/protocol/http/Makefile.am b/source/protocol/http/Makefile.am index 460bfa2a..086ec630 100644 --- a/source/protocol/http/Makefile.am +++ b/source/protocol/http/Makefile.am @@ -21,7 +21,7 @@ AM_CFLAGS = lib_LTLIBRARIES = libhttp.la libhttp_la_SOURCES = curlinterface.c multicurlinterface.c -libhttp_la_LDFLAGS = -shared -fPIC -lcurl +libhttp_la_LDFLAGS = -shared -fPIC -lcurl -lcrypto if IS_LIBRDKCERTSEL_ENABLED libhttp_la_CFLAGS = $(LIBRDKCERTSEL_FLAG) libhttp_la_LDFLAGS += -lRdkCertSelector -lrdkconfig diff --git a/source/protocol/http/multicurlinterface.c b/source/protocol/http/multicurlinterface.c index 2d0113dc..bb2c26d8 100644 --- a/source/protocol/http/multicurlinterface.c +++ b/source/protocol/http/multicurlinterface.c @@ -33,6 +33,7 @@ #include #include #include +#include #include "multicurlinterface.h" #include "busInterface.h" #include "t2log_wrapper.h" @@ -279,6 +280,27 @@ T2ERROR init_connection_pool() 
CURL_SETOPT_CHECK(pool_entries[i].easy_handle, CURLOPT_SSLVERSION, CURL_SSLVERSION_DEFAULT); CURL_SETOPT_CHECK(pool_entries[i].easy_handle, CURLOPT_SSL_VERIFYPEER, 1L); + // Memory-bounding options for long-running daemon. + // MAXCONNECTS=1: limits cached connections per handle to one entry + // without this the internal connection cache silently accumulates SSL + // session objects whenever the target IP or cert rotates. + CURL_SETOPT_CHECK(pool_entries[i].easy_handle, CURLOPT_MAXCONNECTS, 1L); + // SSL_SESSIONID_CACHE=0: disables the SSL session-ticket cache in the + // handle's SSL_CTX. + CURL_SETOPT_CHECK(pool_entries[i].easy_handle, CURLOPT_SSL_SESSIONID_CACHE, 0L); + + // Bound DNS cache lifetime to match server reset interval. + // Load balancer IPs rotate; without this limit, stale IP->hostname + // mappings accumulate in the handle's DNS cache alongside stale + // connection objects — same class of problem as MAXCONNECTS. + CURL_SETOPT_CHECK(pool_entries[i].easy_handle, CURLOPT_DNS_CACHE_TIMEOUT, 30L); + + // Limit the receive buffer to 8KB + CURL_SETOPT_CHECK(pool_entries[i].easy_handle, CURLOPT_BUFFERSIZE, 8192L); + + // Limit the send buffer to 8KB (Requires libcurl 7.62.0+) + CURL_SETOPT_CHECK(pool_entries[i].easy_handle, CURLOPT_UPLOAD_BUFFERSIZE, 8192L); + #ifdef LIBRDKCERTSEL_BUILD // Initialize certificate selectors for each easy handle bool state_red_enable = false; @@ -662,6 +684,13 @@ T2ERROR http_pool_get(const char *url, char **response_data, bool enable_file_ou free(pCertPC); } #endif + ERR_clear_error(); + + // Release all OpenSSL per-thread state (ERR stack, cached allocations). + // Called at the end of the worker thread's HTTP operation to prevent + // thread-local memory from accumulating across the daemon lifetime. 
+ OPENSSL_thread_stop(); + release_pool_handle(idx); return T2ERROR_FAILURE; } @@ -784,6 +813,13 @@ T2ERROR http_pool_get(const char *url, char **response_data, bool enable_file_ou // Important Note: When using LIBRDKCERTSEL_BUILD, pCertURI and pCertPC are owned by the // cert selector object and are freed when rdkcertselector_free() is called + // Clear OpenSSL per-thread error queue. + // Every curl_easy_perform() may push records onto the per-thread ERR_STATE + // list on any TLS error (cert verify failure, connection reset, timeout). + // Without this call the list grows unboundedly over the daemon lifetime. + // ERR_clear_error() is thread-safe since OpenSSL 1.1.0. + ERR_clear_error(); + release_pool_handle(idx); T2Debug("%s ++out\n", __FUNCTION__); @@ -1059,6 +1095,11 @@ T2ERROR http_pool_post(const char *url, const char *payload) pCertPC = NULL; } #endif + + // Release all OpenSSL per-thread state at the end of the worker thread's + // HTTP operation (see http_pool_get for rationale) + OPENSSL_thread_stop(); + // Note: When using LIBRDKCERTSEL_BUILD, pCertURI and pCertPC are owned by the // cert selector object and are freed when rdkcertselector_free() is called release_pool_handle(idx); diff --git a/source/test/bulkdata/Makefile.am b/source/test/bulkdata/Makefile.am index b4413819..f89267d4 100644 --- a/source/test/bulkdata/Makefile.am +++ b/source/test/bulkdata/Makefile.am @@ -59,7 +59,7 @@ profile_gtest_bin_CPPFLAGS = -I$(PKG_CONFIG_SYSROOT_DIR)/usr/include/gtest -I$(P profile_gtest_bin_SOURCES = gtest_main.cpp ../mocks/SystemMock.cpp ../mocks/FileioMock.cpp ../mocks/rdklogMock.cpp ../mocks/rbusMock.cpp ../mocks/rdkconfigMock.cpp ../mocks/VectorMock.cpp SchedulerMock.cpp profileMock.cpp ../../bulkdata/profile.c profileTest.cpp ../../utils/persistence.c ../../utils/t2common.c ../../utils/t2collection.c ../../utils/t2MtlsUtils.c ../../utils/t2log_wrapper.c ../../dcautil/dcautil.c ../../dcautil/dca.c ../../dcautil/legacyutils.c ../../dcautil/dcaproc.c 
../../xconf-client/xconfclient.c ../../protocol/rbusMethod/rbusmethodinterface.c ../../privacycontrol/rdkservices_privacyutils.c ../../reportgen/reportgen.c ../../bulkdata/t2eventreceiver.c ../../bulkdata/t2markers.c ../../t2parser/t2parser.c ../../bulkdata/datamodel.c ../../t2parser/t2parserxconf.c ../../bulkdata/reportprofiles.c ../../bulkdata/profilexconf.c ../../ccspinterface/rbusInterface.c ../../ccspinterface/busInterface.c ../../protocol/http/curlinterface.c ../../protocol/http/multicurlinterface.c -profile_gtest_bin_LDFLAGS = -L/usr/src/googletest/googletest/lib/.libs -lgcov -L/src/googletest/googlemock/lib -L/usr/src/googletest/googlemock/lib/.libs -L/usr/include/glib-2.0 -lgmock -lcjson -lcurl -lmsgpackc -lgtest -lgtest_main -lglib-2.0 +profile_gtest_bin_LDFLAGS = -L/usr/src/googletest/googletest/lib/.libs -lgcov -L/src/googletest/googlemock/lib -L/usr/src/googletest/googlemock/lib/.libs -L/usr/include/glib-2.0 -lgmock -lcjson -lcurl -lcrypto -lmsgpackc -lgtest -lgtest_main -lglib-2.0 profile_gtest_bin_LDFLAGS += -Wl,--wrap=sendReportOverHTTP -Wl,--wrap=sendCachedReportsOverHTTP @@ -69,7 +69,7 @@ reportprofiles_gtest_bin_CPPFLAGS = -I$(PKG_CONFIG_SYSROOT_DIR)/usr/include/gtes reportprofiles_gtest_bin_SOURCES = gtest_main.cpp ../mocks/SystemMock.cpp ../mocks/FileioMock.cpp ../mocks/rdklogMock.cpp ../mocks/rbusMock.cpp ../mocks/rdkconfigMock.cpp ../mocks/VectorMock.cpp SchedulerMock.cpp reportprofileMock.cpp ../../bulkdata/reportprofiles.c reportprofilesTest.cpp ../../utils/persistence.c ../../utils/t2common.c ../../utils/t2collection.c ../../utils/t2MtlsUtils.c ../../utils/t2log_wrapper.c ../../dcautil/dcautil.c ../../dcautil/dca.c ../../dcautil/legacyutils.c ../../dcautil/dcaproc.c ../../xconf-client/xconfclient.c ../../protocol/rbusMethod/rbusmethodinterface.c ../../privacycontrol/rdkservices_privacyutils.c ../../reportgen/reportgen.c ../../bulkdata/t2eventreceiver.c ../../bulkdata/t2markers.c ../../t2parser/t2parser.c ../../bulkdata/datamodel.c 
../../t2parser/t2parserxconf.c ../../bulkdata/profile.c ../../bulkdata/profilexconf.c ../../ccspinterface/rbusInterface.c ../../ccspinterface/busInterface.c ../../protocol/http/curlinterface.c ../../protocol/http/multicurlinterface.c -reportprofiles_gtest_bin_LDFLAGS = -L/usr/src/googletest/googletest/lib/.libs -L/usr/src/googletest/googlemock/lib/.libs -lgmock -lgtest -lpthread -lcjson -lmsgpackc -lglib-2.0 -lrt -lcurl +reportprofiles_gtest_bin_LDFLAGS = -L/usr/src/googletest/googletest/lib/.libs -L/usr/src/googletest/googlemock/lib/.libs -lgmock -lgtest -lpthread -lcjson -lmsgpackc -lglib-2.0 -lrt -lcurl -lcrypto reportprofiles_gtest_bin_LDFLAGS += -Wl,--wrap=isRbusEnabled -Wl,--wrap=sendReportOverHTTP -Wl,--wrap=sendCachedReportsOverHTTP diff --git a/source/test/dcautils/Makefile.am b/source/test/dcautils/Makefile.am index 51c162cc..455e6779 100644 --- a/source/test/dcautils/Makefile.am +++ b/source/test/dcautils/Makefile.am @@ -33,4 +33,4 @@ dcautil_gtest_bin_SOURCES = gtest_main.cpp ../mocks/SystemMock.cpp ../mocks/File #dcautil_gtest_bin_SOURCES = gtest_main.cpp ../mocks/SystemMock.cpp ../mocks/FileioMock.cpp ../mocks/rdklogMock.cpp ../mocks/rbusMock.cpp ../xconf-client/xconfclientTest.cpp ../xconf-client/xconfclientMock.cpp ../mocks/rdkconfigMock.cpp ../../utils/persistence.c ../../utils/t2common.c ../../utils/t2collection.c ../../utils/vector.c ../../utils/t2MtlsUtils.c ../../utils/t2log_wrapper.c ../../dcautil/dcautil.c ../../dcautil/dca.c ../../dcautil/legacyutils.c ../../dcautil/dcaproc.c ../../commonlib/telemetry_busmessage_sender.c ../../xconf-client/xconfclient.c ../../protocol/http/curlinterface.c ../../protocol/rbusMethod/rbusmethodinterface.c ../../privacycontrol/rdkservices_privacyutils.c ../../reportgen/reportgen.c ../../t2parser/t2parser.c -dcautil_gtest_bin_LDFLAGS = -L/usr/src/googletest/googletest/lib/.libs -lgcov -L/src/googletest/googlemock/lib -L/usr/src/googletest/googlemock/lib/.libs -L/usr/include/glib-2.0 -lgmock -lcjson -lcurl 
-lmsgpackc -lgtest -lgtest_main -lglib-2.0 +dcautil_gtest_bin_LDFLAGS = -L/usr/src/googletest/googletest/lib/.libs -lgcov -L/src/googletest/googlemock/lib -L/usr/src/googletest/googlemock/lib/.libs -L/usr/include/glib-2.0 -lgmock -lcjson -lcurl -lcrypto -lmsgpackc -lgtest -lgtest_main -lglib-2.0