Add HashiCorp Vault integration for multi-cloud secret management#406
Conversation
New microservice for multi-tenant SaaS management: Tenant Management: - CRUD operations for health plan tenants - Automatic tenant ID generation (e.g., blueshield-ca-a1b2c3d4) - Status management (pending, active, suspended, terminated) - Multi-tier support (Starter, Professional, Enterprise) - Configuration management (modules, clearinghouse settings) API Key Management: - Cryptographically secure key generation (cho_xxxx...) - SHA256 hashing (never store plain-text) - Scoped permissions (claims:read, claims:write) - Expiration support and revocation - Last used tracking Usage Tracking: - Monthly metrics (claims, prior auths, eligibility, API calls) - Automatic monthly reset - Storage tracking (GB) - Tier limit enforcement ready Stripe Billing Integration: - Customer creation in Stripe - Subscription management
…ouse, Portals, Analytics, Mobile
- X12 834 parser (Node.js) for benefit enrollment EDI files - Enrollment import service (.NET 8) with Cosmos DB integration - Argo CronWorkflow for automated SFTP processing - Kubernetes deployment with HPA (2-10 replicas) - Sample 834 test file and comprehensive documentation Handles member additions, changes, and terminations with multi-tenant isolation.
- Add validation for test-x12-834-enrollment-sample.edi - Add microservices structure validation (enrollment-import-service) - Add Node.js container validation (x12-834-parser) - Add Argo Workflow validation (x12-834-enrollment-import.yaml) - Update test summary to reflect new validations
- Add 834 transaction type definition - Accept 'BE' functional identifier for enrollment (in addition to 'HI' for claims) - Update parameter validation to include 834 - Update documentation to reflect 834 support Fixes smoke test failures when validating test-x12-834-enrollment-sample.edi
- Add exclusions for code comments about validation (exists, verify, validate, check, ensure) - Allow internal Kubernetes cluster URLs (svc.cluster.local, localhost) for UnencryptedPHI check - These are not actual PHI exposures - just metadata and internal service communication Resolves false positive smoke test failures in security scan.
- Add WebSocket support to portal ingress (proxy upgrade headers) - Enable sticky sessions with cookie-based affinity (required for Blazor Server with multiple replicas) - Add nginx ConfigMap with WebSocket upgrade header mapping - Increase session affinity timeout to 3 hours for long-running Blazor sessions - Configure proxy buffers for SignalR message handling Fixes WebSocket connection failures: 'No Connection with that ID' 404 errors
WebSocket support is still enabled via: - websocket-services annotation - proxy-http-version 1.1 - nginx ConfigMap with upgrade header mapping
…vices/claims-scrubbing-service/azure/storage-blob-12.30.0 Bump @azure/storage-blob from 12.29.1 to 12.30.0 in /services/claims-scrubbing-service
- Replace Unicode bullets with ASCII hyphens - Escape quotes in verbatim string to fix RZ1000 unterminated string literal error - Fixes Docker build failure in portal image
…numerable conversions, resolve ClaimDetails naming conflict
…mportService namespace
…d->OnFilesChanged, remove MudDialog.Show())
…for Cosmos DB consistency
- Document 834 enrollment import service deployment - Document 837 claims service deployment - Include Cosmos DB container configuration - Add System.Text.Json serialization patterns - Document partition key strategy (/id) - Include test payloads and port-forwarding instructions - Add troubleshooting guide for common issues - Update service status: enrollment-import-service and claims-service deployed and tested
- Add Cosmos DB integration to Core Capabilities section - Document 834 enrollment import service (deployed and tested) - Document 837 claims service (deployed and tested) - Add Cosmos DB persistence to Production-Ready Features - Add documentation section with link to COSMOS-DB-DEPLOYMENT.md - Highlight successful multi-tenant isolation and System.Text.Json serialization
- Successfully tested 837I institutional claim (,650 hospital admission) - Add test-claim-institutional-payload.json with 5 revenue-coded service lines - Document both 837P and 837I claim types with sample payloads - Include revenue codes (0200 ICU, 0730 ECG, 0481 cardiac cath, 0300 lab, 0250 pharmacy) - Verify institutional claims persist correctly in Cosmos DB - Update test commands section to include both claim types
Co-authored-by: aurelianware <194855645+aurelianware@users.noreply.github.com>
Co-authored-by: aurelianware <194855645+aurelianware@users.noreply.github.com>
…itch deck Co-authored-by: aurelianware <194855645+aurelianware@users.noreply.github.com>
…ucture Co-authored-by: aurelianware <194855645+aurelianware@users.noreply.github.com>
…on and documentation Co-authored-by: aurelianware <194855645+aurelianware@users.noreply.github.com>
- Update npm packages to fix 35 vulnerabilities (ajv, minimatch, eslint) - Add CodeQL security scanning workflow for JS/TS/C# - Enhance Trivy scanning with failure on critical/high severity - Remove error suppression from npm audit checks - Create symlinks for smoke test prerequisites - Configure security overrides in package.json All security scans now passing with 0 vulnerabilities.
- Improves coverage from 9.3% to 95.34% (statements) - 45 new test cases covering mapX12837ToFhirClaim, mapX12835ToFhirEob, mapX12278ToFhirPriorAuth, batch processing - Tests all claim types (professional, institutional, dental) - Validates date formatting, service lines, diagnoses, adjustments, identifiers - Handles edge cases (empty arrays, optional fields, error conditions) - Overall coverage improved to 88.34% statements, 93.37% functions
…ocs, APIs, Pricing, Legal pages - Dual navigation for authenticated vs public users - Fix README pricing and add FHIR API section
@claude can you fix the checks and deployment
Pull request overview
This PR introduces HashiCorp Vault as a multi-cloud secrets backend (with Azure Key Vault as a fallback), adding a shared .NET configuration provider plus supporting infrastructure (Helm/Bicep), automation scripts, workflow updates, and documentation.
Changes:
- Add a shared IConfigurationProvider implementation to load/refresh secrets from Vault via Kubernetes or AppRole auth.
- Add Vault deployment artifacts: Helm values for Kubernetes HA Raft and an Azure Container Instances (ACI) option via Bicep, wired through a new secretProvider parameter.
- Add automation (Vault setup + package injection) and expand docs/security guidance for Vault usage.
Reviewed changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 15 comments.
| File | Description |
|---|---|
| src/shared/Configuration/VaultConfigurationExtensions.cs | Adds Vault-backed configuration source/provider with optional periodic reload. |
| src/shared/Configuration/README.md | Documents app integration steps and expected secret structure. |
| scripts/setup-vault.sh | Automates Vault init/unseal/auth/policy setup and initial secret seeding. |
| scripts/add-vault-packages.sh | Adds VaultSharp packages to all microservice .csproj files. |
| infra/vault/values.yaml | Helm values for Vault HA Raft + TLS + injector + PDB/affinity. |
| infra/modules/vault-aci.bicep | Bicep module to run Vault in Azure Container Instances with Azure Files persistence. |
| infra/main.bicep | Adds secretProvider selector and conditional Vault/Key Vault deployment. |
| infra/k8s/vault-serviceaccount.yaml | ServiceAccounts + token review binding + Vault config ConfigMap for Kubernetes auth. |
| docs/security/HASHICORP-VAULT-ISSUE-SUMMARY.md | High-level implementation summary and operational checklist. |
| docs/security/HASHICORP-VAULT-INTEGRATION.md | Detailed Vault deployment/integration guide. |
| SECURITY.md | Updates secrets management section to include Vault guidance. |
| .github/workflows/deploy.yml | Adds Vault→Key Vault→GitHub Secrets retrieval flow for deployment secrets. |
```csharp
// Determine the path format (with or without /data/ prefix)
var secretPath = _source.BasePath.Contains("/data/")
    ? _source.BasePath.Replace("/data/", "/")
    : _source.BasePath;

// List all secrets under the base path
var listResult = await _source.Client.V1.Secrets.KeyValue.V2.ReadSecretPathsAsync(
    secretPath,
    mountPoint: "secret");
```
Vault KV v2 paths are being passed with a "secret/..." prefix while also hard-coding mountPoint: "secret". With VaultSharp, the V2 path is typically relative to the mount (e.g., "cloudhealthoffice"), so this will likely call "secret/metadata/secret/..." and fail to list/read secrets. Consider splitting mount point from the configured base path (default mountPoint="secret"), and normalize BasePath to be mount-relative (strip leading "secret/" and "secret/data/").
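A minimal sketch of the normalization this comment suggests, expressed here as a portable shell function rather than the provider's C# (the function name and the cloudhealthoffice example path are illustrative, not from the PR):

```shell
# Sketch: make a configured base path mount-relative for KV v2.
# Strips a leading "secret/data/" or "secret/" so the remainder can be
# passed to the KV v2 API alongside mountPoint="secret".
normalize_base_path() {
  p="$1"
  p="${p#secret/data/}"   # drop a leading "secret/data/" if present
  p="${p#secret/}"        # otherwise drop a leading "secret/"
  printf '%s\n' "$p"
}

normalize_base_path "secret/data/cloudhealthoffice"   # -> cloudhealthoffice
normalize_base_path "secret/cloudhealthoffice"        # -> cloudhealthoffice
normalize_base_path "cloudhealthoffice"               # -> cloudhealthoffice
```

The same two-step strip, applied to BasePath before calling ReadSecretPathsAsync, would avoid the doubled "secret/metadata/secret/..." request the comment describes.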
```csharp
catch (Exception ex)
{
    Console.WriteLine($"❌ Failed to load secrets from Vault: {ex.Message}");
    Console.WriteLine($"   {ex.GetType().Name}: {ex.StackTrace?.Split('\n').FirstOrDefault()?.Trim()}");
}

Data = data;
OnReload();
```
LoadAsync always assigns Data = data even when the Vault load fails, which clears previously loaded secrets on transient outages. This contradicts the “continues with existing secrets” behavior described in docs. Preserve the previous Data on failure (only overwrite Data when a full load succeeds), or merge new values into existing Data.
```shell
# Configure Kubernetes auth
vault write auth/kubernetes/config \
    kubernetes_host="$KUBERNETES_HOST" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    token_reviewer_jwt=@/var/run/secrets/kubernetes.io/serviceaccount/token \
```
This writes auth/kubernetes/config using kubernetes_ca_cert=@/var/run/secrets/... and token_reviewer_jwt=@/var/run/secrets/..., but the command is executed on the operator machine (via port-forward), where those file paths typically don't exist. Run this vault write from inside a Vault pod (kubectl exec), or fetch the service account CA cert and token via kubectl and pass the contents explicitly; otherwise Kubernetes auth setup will fail.
Suggested change:
```diff
-# Configure Kubernetes auth
-vault write auth/kubernetes/config \
-    kubernetes_host="$KUBERNETES_HOST" \
-    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
-    token_reviewer_jwt=@/var/run/secrets/kubernetes.io/serviceaccount/token \
+# Determine Vault namespace and service account used for Kubernetes auth
+VAULT_NAMESPACE="${VAULT_NAMESPACE:-vault}"
+VAULT_SA_NAME="${VAULT_SA_NAME:-vault}"
+# Fetch service account secret containing token and CA certificate
+SA_SECRET_NAME=$(kubectl get sa "${VAULT_SA_NAME}" -n "${VAULT_NAMESPACE}" -o jsonpath='{.secrets[0].name}')
+SA_CA_CRT=$(kubectl get secret "${SA_SECRET_NAME}" -n "${VAULT_NAMESPACE}" -o jsonpath='{.data.ca\.crt}' | base64 --decode)
+SA_TOKEN=$(kubectl get secret "${SA_SECRET_NAME}" -n "${VAULT_NAMESPACE}" -o jsonpath='{.data.token}' | base64 --decode)
+# Configure Kubernetes auth using explicit CA cert and token contents
+vault write auth/kubernetes/config \
+    kubernetes_host="$KUBERNETES_HOST" \
+    kubernetes_ca_cert="$SA_CA_CRT" \
+    token_reviewer_jwt="$SA_TOKEN" \
```
```shell
# Port forward to Vault (in background)
kubectl port-forward -n vault svc/vault 8200:8200 > /dev/null 2>&1 &
PORT_FORWARD_PID=$!
sleep 3

# Enable audit logging
print_info "Enabling audit logging..."
vault audit enable file file_path=/vault/audit/audit.log || print_warning "Audit already enabled"
print_success "Audit logging enabled"

# Enable secrets engines
print_info "Enabling secrets engines..."
vault secrets enable -path=secret kv-v2 || print_warning "KV v2 already enabled"
vault secrets enable transit || print_warning "Transit already enabled"
print_success "Secrets engines enabled"

# Create encryption key for PHI
print_info "Creating PHI encryption key..."
vault write -f transit/keys/phi-encryption || print_warning "Encryption key already exists"
print_success "PHI encryption key created"

# Enable authentication methods
print_info "Enabling authentication methods..."
configure_kubernetes_auth
configure_approle_auth

# Kill port forward
kill $PORT_FORWARD_PID 2>/dev/null || true
```
Port-forward is started in the background and only killed on the success path. If any command fails after starting it (set -e), the background process can be left running. Add a trap to always kill $PORT_FORWARD_PID on EXIT/ERR once it’s set.
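One way to implement this, sketched under the assumption that the script runs with `set -euo pipefail` (the `sleep 60` below is a stand-in for the real kubectl port-forward so the snippet is self-contained):

```shell
set -euo pipefail

# Start the background process (in the real script:
#   kubectl port-forward -n vault svc/vault 8200:8200 >/dev/null 2>&1 &),
# then register a trap so it is killed on ANY exit: success, error, or signal.
sleep 60 &   # stand-in background process
PORT_FORWARD_PID=$!
trap 'kill "${PORT_FORWARD_PID}" 2>/dev/null || true' EXIT

# ... vault setup commands run here; if one fails under set -e,
# the EXIT trap still fires and the port-forward is cleaned up.
```

Registering the trap immediately after capturing `$!` means no later failure can leak the background process.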
```shell
# Find the last PackageReference line and add after it
if grep -q "<PackageReference" "$CSPROJ_FILE"; then
    # Use sed to add VaultSharp reference after the last PackageReference
    sed -i '/<\/ItemGroup>/i \    <PackageReference Include="VaultSharp" Version="'"$VAULTSHARP_VERSION"'" />\n    <PackageReference Include="VaultSharp.Extensions.Configuration" Version="'"$VAULTSHARP_VERSION"'" />' "$CSPROJ_FILE"
```
This script uses sed -i without a backup suffix, which works on GNU sed but fails on macOS/BSD sed. Other repo scripts use sed -i.bak and remove the backup afterwards; consider matching that pattern here for portability.
Suggested change:
```diff
-sed -i '/<\/ItemGroup>/i \    <PackageReference Include="VaultSharp" Version="'"$VAULTSHARP_VERSION"'" />\n    <PackageReference Include="VaultSharp.Extensions.Configuration" Version="'"$VAULTSHARP_VERSION"'" />' "$CSPROJ_FILE"
+sed -i.bak '/<\/ItemGroup>/i \    <PackageReference Include="VaultSharp" Version="'"$VAULTSHARP_VERSION"'" />\n    <PackageReference Include="VaultSharp.Extensions.Configuration" Version="'"$VAULTSHARP_VERSION"'" />' "$CSPROJ_FILE"
+rm "$CSPROJ_FILE.bak"
```
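A quick way to confirm the portable pattern in isolation (temporary file and substitution are illustrative):

```shell
# Demonstrates the portable in-place sed pattern: an explicit backup suffix
# attached to -i works on both GNU sed and macOS/BSD sed, and the backup
# file is removed afterwards, matching the convention in the repo's scripts.
f=$(mktemp)
printf 'alpha\nbeta\n' > "$f"
sed -i.bak 's/beta/gamma/' "$f"   # creates "$f.bak" holding the original
rm -f "$f.bak"                    # drop the backup once the edit succeeds
cat "$f"                          # prints "alpha" then "gamma"
rm -f "$f"
```

Note the suffix must be attached directly to `-i` (no space) for the invocation to behave the same under both sed implementations.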
```yaml
serviceType: LoadBalancer  # Change to ClusterIP for ingress
serviceNodePort: null
externalPort: 8200

# Annotations for Load Balancer
annotations:
  service.beta.kubernetes.io/azure-load-balancer-internal: "false"  # Public LB for access
  # For AWS: service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
  # For GCP: cloud.google.com/load-balancer-type: "External"
```
Vault UI is configured as a public LoadBalancer (azure-load-balancer-internal: "false"). Exposing Vault’s UI/API publicly by default is a high-risk posture for a secrets system. Prefer ClusterIP with an ingress restricted by IP allowlists/auth, or at minimum default to an internal LB and document how to intentionally expose it.
Suggested change:
```diff
-serviceType: LoadBalancer  # Change to ClusterIP for ingress
-serviceNodePort: null
-externalPort: 8200
-# Annotations for Load Balancer
-annotations:
-  service.beta.kubernetes.io/azure-load-balancer-internal: "false"  # Public LB for access
-  # For AWS: service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
-  # For GCP: cloud.google.com/load-balancer-type: "External"
+serviceType: ClusterIP  # Secure default: internal access; use ingress or override to expose externally
+serviceNodePort: null
+externalPort: 8200
+# Annotations for Service / Load Balancer
+# Default: no public load balancer exposure. Use ingress with IP allowlists/auth for external access.
+annotations: {}
+# To use an internal Azure Load Balancer instead of ClusterIP, override these values:
+# ui:
+#   serviceType: LoadBalancer
+#   annotations:
+#     service.beta.kubernetes.io/azure-load-balancer-internal: "true"  # Internal LB
+# For AWS Network Load Balancer (example only, not enabled by default):
+# ui:
+#   serviceType: LoadBalancer
+#   annotations:
+#     service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
+# For GCP external load balancer (example only, not enabled by default):
+# ui:
+#   serviceType: LoadBalancer
+#   annotations:
+#     cloud.google.com/load-balancer-type: "External"
```
```bicep
param enableDeploymentKeyVault bool = (secretProvider == 'azurekeyvault')
param deploymentKeyVaultName string = '${baseName}-deploy-kv'

// HashiCorp Vault parameters (when secretProvider = hashicorpvault)
param enableHashiCorpVault bool = (secretProvider == 'hashicorpvault')
param vaultContainerName string = 'vault-${baseName}'
param vaultDnsLabel string = 'cho-vault-${uniqueString(resourceGroup().id)}'
param vaultVersion string = '1.15.4'

// =========================
// Variables
```
enableDeploymentKeyVault and enableHashiCorpVault are still declared as overridable parameters even though secretProvider is intended to be the single selector. This allows inconsistent combinations (e.g., secretProvider=hashicorpvault but enableDeploymentKeyVault=true) and makes deployments harder to reason about. Consider making these vars derived from secretProvider (or explicitly document/validate precedence).
Suggested change:
```diff
-param enableDeploymentKeyVault bool = (secretProvider == 'azurekeyvault')
-param deploymentKeyVaultName string = '${baseName}-deploy-kv'
-// HashiCorp Vault parameters (when secretProvider = hashicorpvault)
-param enableHashiCorpVault bool = (secretProvider == 'hashicorpvault')
-param vaultContainerName string = 'vault-${baseName}'
-param vaultDnsLabel string = 'cho-vault-${uniqueString(resourceGroup().id)}'
-param vaultVersion string = '1.15.4'
-// =========================
-// Variables
+param deploymentKeyVaultName string = '${baseName}-deploy-kv'
+// HashiCorp Vault parameters (when secretProvider = hashicorpvault)
+param vaultContainerName string = 'vault-${baseName}'
+param vaultDnsLabel string = 'cho-vault-${uniqueString(resourceGroup().id)}'
+param vaultVersion string = '1.15.4'
+// =========================
+// Variables
+var enableDeploymentKeyVault = (secretProvider == 'azurekeyvault')
+var enableHashiCorpVault = (secretProvider == 'hashicorpvault')
```
```hcl
listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}
```
The Vault ACI configuration disables TLS for the Vault listener (tls_disable = 1) and binds it to 0.0.0.0:8200, which means all Vault traffic (including tokens and secrets) is exposed in cleartext over HTTP. An attacker on the network or the public internet (given the public IP on this container group) can eavesdrop on or tamper with Vault API calls and potentially steal credentials or data. Configure Vault to require TLS (tls_disable = 0), use HTTPS addresses for api_addr/cluster_addr, and restrict exposure to a private network or internal endpoint instead of a public HTTP listener.
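A hedged sketch of what a TLS-enabled listener stanza could look like; the certificate paths, the volume mount they imply, and the api_addr hostname are assumptions, not part of this PR:

```hcl
# Sketch: require TLS on the Vault listener. The cert/key paths are
# hypothetical and would need to be mounted into the container
# (e.g. from an Azure Files share alongside the Raft data volume).
listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_disable   = 0
  tls_cert_file = "/vault/tls/tls.crt"
  tls_key_file  = "/vault/tls/tls.key"
}

# api_addr / cluster_addr must then use https (hostname is illustrative)
api_addr = "https://vault.internal.example.com:8200"
```

Combined with a private network or internal endpoint, this keeps tokens and secret payloads off the public internet.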
```shell
print_info "To access Vault UI:"
echo "  kubectl port-forward -n vault svc/vault 8200:8200"
echo "  Open: http://localhost:8200"
echo "  Token: $(jq -r '.root_token' vault-keys.json 2>/dev/null || echo 'See vault-keys.json')"
```
The setup script prints the Vault root_token directly to standard output, which can end up in shell history or centralized logs and exposes the highest‑privilege credential for the Vault cluster. Anyone with access to those logs could fully compromise Vault, read or modify all secrets, and change policies. Avoid ever logging the root token; instead, instruct operators to retrieve it manually from a secure file or password manager and ensure it is only stored in dedicated secure secret storage.
Suggested change:
```diff
-echo "  Token: $(jq -r '.root_token' vault-keys.json 2>/dev/null || echo 'See vault-keys.json')"
+echo "  Token: Retrieve the initial root token securely from vault-keys.json or your secret store (never expose it in logs or shell history)"
```
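A sketch of tightening file handling around the init output as well; the filename mirrors the script, while the JSON literal is a stand-in for the real `vault operator init -format=json` output:

```shell
# Sketch: write the init output with owner-only permissions and never
# echo the token itself to stdout or logs.
umask 077                                    # new files: owner read/write only
printf '%s\n' '{"root_token":"stand-in"}' > vault-keys.json   # stand-in for init output
chmod 600 vault-keys.json
echo "Root token saved to vault-keys.json (mode 600)."
echo "Move it to a password manager or sealed secret store, then delete the file."
```

The operator still gets a clear pointer to the credential without the token value ever transiting stdout.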
```shell
vault policy write cho-microservices - <<EOF
# Allow reading all application secrets
path "secret/data/cloudhealthoffice/*" {
  capabilities = ["read", "list"]
}

# Allow encryption/decryption for PHI
path "transit/encrypt/phi-encryption" {
  capabilities = ["update"]
}

path "transit/decrypt/phi-encryption" {
  capabilities = ["update"]
}

# Allow reading own token
path "auth/token/lookup-self" {
  capabilities = ["read"]
}

# Allow renewing own token
path "auth/token/renew-self" {
  capabilities = ["update"]
}
EOF

# Create Kubernetes role for microservices
vault write auth/kubernetes/role/cho-microservices \
  bound_service_account_names=cho-service-account,member-service-sa,claims-service-sa,eligibility-service-sa,authorization-service-sa,coverage-service-sa,provider-service-sa,tenant-service-sa,appeals-service-sa,attachment-service-sa,benefit-plan-service-sa,claims-scrubbing-service-sa,enrollment-import-service-sa,payment-service-sa,reference-data-service-sa,sponsor-service-sa,trading-partner-service-sa \
  bound_service_account_namespaces=cho-svcs \
  policies=cho-microservices \
  ttl=1h \
  max_ttl=24h
```
The cho-microservices Vault policy and Kubernetes role allow any bound service account to read and list all secrets under secret/data/cloudhealthoffice/* and use the PHI transit key, so compromise of a single microservice pod yields access to every application secret. This breaks least‑privilege isolation between services and significantly amplifies the impact of any RCE or token theft. Define per‑service policies and roles that restrict each service account to only the specific secret paths and transit operations it requires instead of a shared, wide‑open policy.
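For illustration, a per-service policy along the lines this comment suggests; the claims-service path and policy scope are hypothetical, not defined in the PR:

```hcl
# Hypothetical per-service policy (e.g. "cho-claims-service"): the service
# can read only its own secret subtree and use only the transit operations
# it actually needs; no list access to sibling services' secrets.
path "secret/data/cloudhealthoffice/claims-service/*" {
  capabilities = ["read"]
}

path "transit/encrypt/phi-encryption" {
  capabilities = ["update"]
}

path "auth/token/renew-self" {
  capabilities = ["update"]
}
```

Each service account would then get its own Kubernetes auth role bound only to its policy (e.g. bound_service_account_names=claims-service-sa, policies=cho-claims-service), so a compromised pod exposes one service's secrets rather than all of them.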
Switches secret management from Azure Key Vault to HashiCorp Vault to enable deployment across Azure, AWS, GCP, and on-premises. Maintains Azure Key Vault as fallback for backward compatibility.
Infrastructure
- Kubernetes deployment (infra/vault/values.yaml)
- Azure Container Instances (infra/modules/vault-aci.bicep)
- Bicep orchestration (infra/main.bicep): secretProvider parameter: azurekeyvault | hashicorpvault | none

Application Integration
- Shared configuration provider (src/shared/Configuration/VaultConfigurationExtensions.cs)
- Features:

CI/CD
- GitHub Actions workflow (deploy.yml)

Automation
- scripts/setup-vault.sh
- scripts/add-vault-packages.sh

Kubernetes
- ServiceAccounts (infra/k8s/vault-serviceaccount.yaml)

Documentation
- docs/security/HASHICORP-VAULT-INTEGRATION.md
- src/shared/Configuration/README.md
- SECURITY.md

Migration Path
Existing Azure Key Vault deployments continue working. To adopt HashiCorp Vault:
1. helm install vault hashicorp/vault --values infra/vault/values.yaml
2. ./scripts/setup-vault.sh
3. ./scripts/add-vault-packages.sh
4. Update Program.cs in each service with the configuration line above
5. Set VAULT_ADDR, VAULT_ROLE_ID, VAULT_SECRET_ID

No forced migration; deployments choose via the secretProvider parameter.

Original prompt
This section details on the original issue you should resolve
<issue_title>[v4.0] HashiCorp Key Vault Integration - Production Security Hardening</issue_title>
<issue_description>---
name: 'Azure Key Vault Integration'
about: Migrate all secrets to Azure Key Vault for production-grade security
title: '[v4.0] Azure Key Vault Integration - Production Security Hardening'
labels: 'security, infrastructure, priority:critical'
assignees: ''
🎯 Objective
Migrate all application secrets from environment variables and configuration files to Azure Key Vault with managed identity authentication. This is a BLOCKER for Beta launch and clearinghouse integration.
Priority: 🔴 CRITICAL
Effort: 1-2 weeks (1 developer)
Depends On: Azure subscription with Key Vault quota
Blocks: Clearinghouse integration, customer onboarding
📋 Success Criteria
🔧 Implementation Steps
Phase 1: Azure Key Vault Setup (Day 1)
1.1 Create Key Vault Resource
1.2 Enable Audit Logging
1.3 Configure Managed Identity for AKS
Phase 2: Migrate Secrets (Days 2-3)
2.1 Audit Current Secrets
Create inventory of all secrets currently in:
2.2 Upload Secrets to Key Vault
2.3 Clearinghouse Credentials (for future integration)