To support distribution as a package, the structure follows a Modular Agent pattern:
- **`orchestrator`:** The manager acting as the master controller. Responsible for managing state, handing off tasks between specialist agents, and maintaining the continuous improvement loop.
- **`auditor`:** The evaluator consuming the application's source code and scoring it against **Evaluation Metrics**.
  - Input: Application's source code
  - Task: Score the project based on evaluation metrics (Consumer Implementation and Design System Enablement)
  - Logic: Use a weighted formula to calculate separate Consumer Alignment and System Enablement scores, then a combined rollup score
  - Output: Structured JSON scorecard with raw scores, status (Pass/Warning/Fail per metric), and evidence citations
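The weighted rollup the Auditor performs can be sketched roughly as follows. This is a minimal illustration only: the `MetricResult` shape, the per-metric weights, and the 50/50 blend between the two dimensions are assumptions, not the actual HPEDS formula.

```typescript
// Hypothetical shape of one metric entry in the Auditor's scorecard.
interface MetricResult {
  name: string;
  score: number;  // normalized 0..1
  weight: number; // relative weight within its dimension
}

// Weighted average of one dimension's metrics (Consumer or System).
function weightedScore(metrics: MetricResult[]): number {
  const totalWeight = metrics.reduce((sum, m) => sum + m.weight, 0);
  if (totalWeight === 0) return 0;
  return metrics.reduce((sum, m) => sum + m.score * m.weight, 0) / totalWeight;
}

// Combined rollup: both dimensions blended 50/50 (assumed ratio).
function combinedAlignment(consumer: MetricResult[], system: MetricResult[]): number {
  return 0.5 * weightedScore(consumer) + 0.5 * weightedScore(system);
}
```

The two dimension scores stay separate in the scorecard so the Strategist can route Consumer fixes and System gaps independently; the rollup is only a headline number.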
- **`strategist`:** The prioritizer that takes the audit findings and ranks them by impact and effort.
  - Input: The Auditor's scorecard report
  - Task: Categorize issues into "Consumer Implementation" (e.g., "You used a hex code") vs. "System Improvement" (e.g., "The system lacks a pattern for this specific dashboard view"), then rank by impact and effort using the Impact/Effort matrix
  - Output: Game Plan with top 3 Consumer recommendations and P1/P2/P3 System Delivery Suggestions
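The Impact/Effort ranking can be sketched as below. The `Finding` shape, the 1–5 scales, and the tie-breaking rule (lower effort wins among equal-impact items) are assumptions for illustration, not the Strategist's actual logic.

```typescript
// Hypothetical finding shape handed from Auditor to Strategist.
interface Finding {
  id: string;
  category: "consumer" | "system";
  impact: number; // 1 (low) .. 5 (high), assumed scale
  effort: number; // 1 (low) .. 5 (high), assumed scale
}

// Rank Consumer findings: highest impact first; lower effort breaks ties.
function rankConsumerFindings(findings: Finding[]): Finding[] {
  return findings
    .filter((f) => f.category === "consumer")
    .sort((a, b) => b.impact - a.impact || a.effort - b.effort);
}

// The Game Plan's top 3 Consumer recommendations.
function topThree(findings: Finding[]): Finding[] {
  return rankConsumerFindings(findings).slice(0, 3);
}
```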
- **`engineer`:** The remediator implementing top priorities.
  - Input: Top priority recommendations from the Strategist
  - Task: Executes the "fix" by modifying the user's local files to replace legacy code with design system tokens, components, and patterns
  - Output: Code diffs for user approval before writing to disk
- **`reporter`:** The telemetry collector silently observing the ecosystem. Responsible for taking granular data from the Auditor and Strategist and compressing it into high-level insights the HPE Design System can use to make roadmap and funding decisions.
  - Input: Data from Auditor (scorecard metrics, evidence) and Strategist (system gaps, impact/effort rankings)
  - Task: Aggregate adoption trends, friction points, and ROI signals across the 50+ consuming teams
  - Output: Monthly/quarterly telemetry dashboard and system roadmap recommendations
## Implementation Strategy: The Continuous Improvement Loop
This is the blueprint for how the CLI package executes. It visualizes the "hand-offs" between the agents.
#### 1. Initiation Phase
- **User/CI:** Executes `hpe-ds-ai audit --fix`.
- **Orchestrator:** Loads `.hpedsrc` config (see the Configuration section below) and fetches the latest `knowledge/` (Tokens, Components, Patterns).
- **Orchestrator → Auditor:** Sends the file path, scope, detected framework, and DS knowledge. "*Analyze this*."

#### 2. Audit Phase

- **Auditor:** Performs AST parsing, regex scans, and static analysis.
- **Auditor → Orchestrator:** Returns a structured JSON report (the Evaluation Metric Scorecard) with Consumer Alignment Score, System Enablement Score, Combined Alignment Score, and classified findings.
- **Orchestrator → Strategist:** Sends the Scorecard. "*What should we do first?*"
#### 3. Strategy Phase
- **Strategist:** Runs the Impact/Effort matrix on Consumer findings and assigns P1/P2/P3 severity to System Delivery Suggestions.
- **Strategist → Orchestrator:** Returns the "Game Plan" (batched Consumer remediation tasks + prioritized System gaps).
- **Orchestrator → User:** Displays the Scorecard (both Consumer and System scores), the improvement delta (if re-auditing), and the top 3 proposed Consumer fixes.
- **User:** Input `[Y]` to approve the top 3 critical Consumer fixes.
#### 4. Remediation Phase
- **Orchestrator → Engineer:** Sends the specific files, approved tasks, and context. "*Fix these 3 items.*"
- **Engineer:** Generates code diffs, ensuring A11y and Token compliance per HPEDS standards.
- **Engineer → User:** Displays the `diff` for review with rationale.
- **User:** Input `[Y]` to write changes to disk.
#### 5. Verification Phase
- **Orchestrator → Auditor:** Sends the newly modified code. "*Verify the improvement.*"
- **Auditor:** Re-scores the evaluation metrics across both Consumer and System dimensions.
- **Orchestrator → User:** Displays the improvement delta (e.g., "Consumer Score: 0.45 → 0.72; System Enablement: 0.60 → 0.65").
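The delta display shown to the user can be sketched as a small formatter (the `ScorePair` shape is hypothetical):

```typescript
// Hypothetical before/after scores for one dimension.
interface ScorePair {
  label: string;
  before: number;
  after: number;
}

// Format entries like "Consumer Score: 0.45 → 0.72", joined with "; ".
function formatDelta(pairs: ScorePair[]): string {
  return pairs
    .map((p) => `${p.label}: ${p.before.toFixed(2)} → ${p.after.toFixed(2)}`)
    .join("; ");
}
```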
#### 6. External Reporting
- **Orchestrator:** If P1 "System Gap" findings were identified, it generates a **System Delivery Ticket** (see the System Delivery Suggestion severity rules in the auditor instructions).
- **Orchestrator:** Sends telemetry to your organization's central dashboard (adoption rate, metric trends, friction points).
## Configuration
### `.hpedsrc` file discovery and setup

The Orchestrator looks for `.hpedsrc` in the following order (first match wins):

1. Root of the repository (`./`)
2. Root of the monorepo workspace (if applicable)
3. User's home directory (`~/.hpedsrc`) as a fallback for global defaults

If no `.hpedsrc` is found, the Orchestrator prompts interactively for `framework` and `scope`, then caches the response in the repo root.

**Recommendation:** Commit `.hpedsrc` to version control so all team members use consistent audit settings.
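The discovery order can be sketched as follows. This is a minimal illustration: the monorepo-workspace step is simplified to a `workspaceRoot` parameter, and parsing the file's JSON/YAML contents is omitted.

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Return the first .hpedsrc path that exists, or null (first match wins).
function findHpedsrc(repoRoot: string, workspaceRoot?: string): string | null {
  const candidates = [
    path.join(repoRoot, ".hpedsrc"),                        // 1. repo root
    workspaceRoot && path.join(workspaceRoot, ".hpedsrc"),  // 2. monorepo workspace root
    path.join(os.homedir(), ".hpedsrc"),                    // 3. global fallback
  ];
  for (const candidate of candidates) {
    if (candidate && fs.existsSync(candidate)) return candidate;
  }
  return null; // caller prompts interactively, then caches the answer in the repo root
}
```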
### `.hpedsrc` file

The `.hpedsrc` file is a JSON or YAML configuration file in the root of the consuming application that tells the Orchestrator how to run audits. The Orchestrator loads this on initiation.

**Required fields:**

- `framework`: The application's UI framework (e.g., `"react"`, `"vue"`, `"angular"`). Used by the Auditor and Engineer to select framework-specific skills.
- `scope`: The default audit scope (e.g., `"src/"`, or a specific directory like `"src/pages/dashboard/"`).

**Optional fields:**

- `feedback_collection`: Whether to collect team feedback signals via a CLI prompt (default: `true`).
- `auto_apply_fixes`: If `true`, the Engineer automatically applies non-critical fixes; if `false`, all fixes require user approval (default: `false`).
- `telemetry_endpoint`: URL for sending Reporter telemetry (defaults to the HPE Design System telemetry service).

**Example:**

```json
{
  "framework": "react",
  "scope": "src/",
  "feedback_collection": true,
  "auto_apply_fixes": false
}
```
## Framework Support
The Auditor and Engineer support multiple frameworks via modular skills. Framework detection:

1. Check the `.hpedsrc` `framework` field (highest priority).
2. Auto-detect from `package.json` dependencies if not specified.
3. Prompt the user if detection fails.

**Supported frameworks:** React (primary, 80% of users), Vue, Angular, and others via pluggable skill modules.

### Framework Coverage & Roadmap

| Framework | Support Status | Auditor | Engineer | Notes |
| --- | --- | --- | --- | --- |
| React | ✅ Stable | Full scoring | Full remediation | Primary platform; most patterns/components built for React |
| Vue | ✅ Beta | Full scoring* | Full remediation* | *Component binding syntax differs; Engineer generates Vue 3 Composition API |
| Angular | ✅ Beta | Full scoring* | Full remediation* | *TypeScript-first; DX metrics heavily weighted toward Angular idioms |
| Next.js | ✅ Included | As React | As React | File-route conventions auto-detected |
| Nuxt | ✅ Included | As Vue | As Vue | File-route conventions auto-detected |

**Unsupported framework fallback:** If a framework is not listed, the Orchestrator falls back to the React skill set with a warning: "Framework not natively supported; using React conventions. Some metrics may be inaccurate."

**Framework-specific skill loading:** The Orchestrator detects the framework from `.hpedsrc` or `package.json` and loads the corresponding `auditor/skills-{framework}.md` and `engineer/skills-{framework}.md` modules.
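The detection chain can be sketched as below. The dependency-to-framework mapping is an assumption for illustration; only the precedence (config wins, then `package.json`, then prompt) comes from the text above.

```typescript
// Hypothetical shapes for the parsed config file and package.json.
interface HpedsrcConfig { framework?: string; }
interface PackageJson { dependencies?: Record<string, string>; }

// Assumed mapping from well-known dependencies to framework names.
// Meta-frameworks (next, nuxt) are listed first so they win over react/vue.
const DEPENDENCY_HINTS: Array<[string, string]> = [
  ["next", "next.js"],
  ["nuxt", "nuxt"],
  ["react", "react"],
  ["vue", "vue"],
  ["@angular/core", "angular"],
];

// 1. .hpedsrc wins; 2. auto-detect from package.json; 3. null → prompt the user.
function detectFramework(rc: HpedsrcConfig, pkg: PackageJson): string | null {
  if (rc.framework) return rc.framework;
  const deps = pkg.dependencies ?? {};
  for (const [dep, framework] of DEPENDENCY_HINTS) {
    if (dep in deps) return framework;
  }
  return null;
}
```

Ordering the hints matters: a Next.js project also depends on `react`, so the meta-framework entry must be checked first.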
## System Delivery Tickets

When the Auditor identifies a **P1 System Gap** (a critical gap in HPEDS capabilities required by consuming teams), the Orchestrator automatically creates a **System Delivery Ticket** in the HPEDS roadmap system.
### Ticket creation trigger

A System Delivery Suggestion is escalated to P1 (and triggers a ticket) when:

- Multiple teams (2+) report the same gap independently, OR
- A single team flags it as blocking critical feature delivery, AND
- No existing or planned HPEDS offering addresses the gap.
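The trigger can be sketched as a predicate. Reading the OR/AND above as "(multiple teams OR blocking) AND no existing offering" is my interpretation of the precedence, and the `GapReport` shape is hypothetical.

```typescript
// Hypothetical aggregate view of one gap across consuming teams.
interface GapReport {
  teamsReporting: number;             // distinct teams reporting this gap
  blocksCriticalDelivery: boolean;    // flagged as blocking by any team
  coveredByExistingOffering: boolean; // existing or planned HPEDS offering
}

// Escalate to P1 (and open a System Delivery Ticket) per the trigger rules above.
function shouldEscalateToP1(gap: GapReport): boolean {
  const demand = gap.teamsReporting >= 2 || gap.blocksCriticalDelivery;
  return demand && !gap.coveredByExistingOffering;
}
```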
### Ticket destination

- **Default:** GitHub Issues in the HPEDS repository (`/hpe-design-system/issues`)
- **Override:** Specify `system_delivery_ticket_endpoint` in `.hpedsrc` to route to external tracking (Jira, Azure DevOps, etc.)

### Ticket structure

```json
{
  "title": "System Delivery Gap: [Gap name] (P1)",
  "body": "Reported by [N] team(s): [Team A], [Team B]\n\nGap description: [evidence from audits]\n\nConsumer impact: [how many teams affected]\n\nProposed solution: [Strategist recommendation]"
}
```
## Telemetry

### What is collected

- **Improvement deltas:** Score changes over time (e.g., 0.45 → 0.75)
- **System gaps:** P1/P2/P3 counts and categories
### What is NOT collected
- Source code or implementation details
- Company/team identifiers (unless explicitly opted in for trend analysis)
- Personal developer names or commit history
- Proprietary business logic or secrets

### Opt-out and privacy controls

Teams can disable telemetry collection by setting, in `.hpedsrc`:

```json
{
  "telemetry_enabled": false
}
```
When disabled:
- The Reporter does not POST to `telemetry_endpoint`
- Local audit scores are still computed and displayed
- System Delivery Tickets for P1 gaps are still created (essential for the HPEDS roadmap)

### Data retention

Telemetry is retained for a rolling 12-month window, then anonymized and aggregated into quarterly trends. Teams can request deletion of their telemetry data by contacting the HPEDS team.

### Responsible use commitment

The HPE Design System team uses telemetry solely to:

- Identify and resolve friction points in the developer workflow

Telemetry is not used for individual developer performance metrics or organizational surveillance.
### Enterprise benefits
- **Asynchronous audits:** Teams can run the Auditor in their PRs without ever running the Engineer (passive monitoring mode). Useful for visibility without mandatory remediation.
- **Modular skills:** If we decide to support a new framework (e.g., Svelte), we only need to update the `skills.md` for the **Auditor** and **Engineer**. The Orchestrator, Strategist, and Reporter logic remains exactly the same.
- **Traceability:** Every code change made by the Engineer is linked back to a specific metric violation found by the Auditor, with evidence citations (file path, line range, matched knowledge artifact).
- **System-level feedback loop:** The Reporter collects adoption and friction signals from 50+ teams, enabling the HPE Design System team to prioritize gaps (P1 tickets) and deprecations.
## Troubleshooting
### "Knowledge sync failed" error

**Symptom:** The Auditor reports "Could not fetch latest knowledge/components."

**Cause:** The Orchestrator cannot reach the knowledge repository (network issue or stale endpoint).