Spring Load Development is a project that demonstrates the integration of modern technologies and frameworks to build a robust microservices architecture. It showcases Spring Boot for microservices, Spring WebFlux for reactive RESTful web services, Spring Data R2DBC for reactive database connectivity, Spring Cloud Config for externalized configuration management, Spring Cloud Gateway as the API gateway, Resilience4J for the circuit breaker pattern, and Spring AI for Model Context Protocol (MCP) server integration.
> **Note:** Load development is used here as an example business domain to demonstrate microservices architecture and modern cloud-native patterns. In competitive shooting, load development refers to systematically testing and refining ammunition components (such as powder charge, bullet, primer, and case dimensions) to achieve optimal accuracy and performance for a specific firearm. This project manages and analyzes load development data as a sample use case, but the architecture and patterns apply to many other domains.
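To make the domain concrete, the sketch below computes two statistics commonly used in load development, extreme spread and average velocity, over a string of chronograph readings. The class and sample values are illustrative only and not part of the project:

```java
import java.util.List;

// Illustrative only: basic chronograph statistics used when evaluating a load.
public class VelocityStats {

    // Extreme spread: difference between the fastest and slowest shot.
    static double extremeSpread(List<Double> velocities) {
        double max = velocities.stream().mapToDouble(Double::doubleValue).max().orElseThrow();
        double min = velocities.stream().mapToDouble(Double::doubleValue).min().orElseThrow();
        return max - min;
    }

    // Average muzzle velocity across the string.
    static double average(List<Double> velocities) {
        return velocities.stream().mapToDouble(Double::doubleValue).average().orElseThrow();
    }

    public static void main(String[] args) {
        List<Double> fps = List.of(2840.0, 2850.0, 2860.0); // sample velocities in ft/s
        System.out.println(extremeSpread(fps)); // 20.0
        System.out.println(average(fps));       // 2850.0
    }
}
```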
- Microservices using Spring Boot
- Reactive RESTful Web Services using Spring WebFlux
- Reactive Relational Database Connectivity using Spring Data R2DBC
- Externalized Configuration Management using Spring Cloud Config
- Service Discovery using Spring Cloud Netflix
- Role-Based Access Control (RBAC) using Spring Security and Keycloak
- API Gateway using Spring Cloud Gateway
- Circuit Breaker using Spring Cloud Circuit Breaker and Resilience4J
- AI Integration using Spring AI and Model Context Protocol (MCP)
- Observability and Monitoring using OpenTelemetry (via Spring Boot starter), Tempo, Loki, Prometheus, and Grafana
- Container Orchestration using Kubernetes and Helm
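As one concrete example of the circuit-breaker integration, a Spring Cloud Gateway route can wrap calls to a backing service in a Resilience4J circuit breaker via the `CircuitBreaker` filter. The route id, path, and fallback URI below are illustrative, not the project's actual configuration:

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: loads-service            # illustrative route id
          uri: lb://loads-service      # resolved via service discovery
          predicates:
            - Path=/api/loads/**
          filters:
            - name: CircuitBreaker
              args:
                name: loadsCircuitBreaker
                fallbackUri: forward:/fallback/loads   # hypothetical fallback endpoint
```

When the breaker opens, the gateway forwards requests to the fallback endpoint instead of the failing service.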
> **Note:** All technologies are open source and widely adopted in cloud-native Java ecosystems.
This project uses the following key versions:
- Java: 25
- Spring Boot: 4.0.3
- Spring Cloud: 2025.1.1
- Spring AI: 2.0
- SpringDoc OpenAPI: 3.0.0
- MapStruct: 1.6.3
- OpenTelemetry Instrumentation: 2.25.0 (+ alpha)
- Maven: 4.0.0-rc-5 or later
- PostgreSQL: 18.2
- Keycloak: 26.5
- Grafana: 12.3.3
- Loki: 3.6.5
- Tempo: 2.10.0
- Prometheus: v3.9.1
- OpenTelemetry Collector: 0.145.0
Prerequisites:
- Java 25 or later
- Docker and Docker Compose
- Maven 4.0+ (4.0.0-rc-5 or later)
- Kubernetes cluster and kubectl (for Kubernetes deployment)
- Helm 3.0+ (for Helm deployment)
The Spring Load Development application can be deployed in multiple ways:
- Docker Compose - For local development and testing
- Kubernetes with Helm - For production deployment
- Manual Spring Boot - For development debugging
```shell
git clone https://github.com/zhoozhoo/spring-load-development.git
cd spring-load-development

# Add and update helm repo (if using a helm repository)
helm repo update

# Create required namespaces first
cd helm/spring-load-development
./create-namespaces.sh

# Install the application
helm install spring-load-development . \
  --values values.yaml \
  --timeout 300s

# Get service URLs
kubectl get services -n reloading

# For local development, port-forward to access services
kubectl port-forward -n reloading service/api-gateway 8080:8080
```
For detailed Helm chart configuration, see the Helm Chart Documentation.
Start the infrastructure services first:
```shell
docker-compose --env-file .env up -d postgres keycloak grafana loki tempo prometheus otel-collector
```
Then start the Spring Boot services:
```shell
# IMPORTANT: All services default to port 8080. To run them side-by-side
# locally, supply a distinct port with -Dserver.port.

# 1. Start Config Server (required) on 8888
java -Dserver.port=8888 -jar config-server/target/config-server-*.jar

# 2. Start Discovery Server on 8761
java -Dserver.port=8761 -jar discovery-server/target/discovery-server-*.jar

# 3. Start API Gateway on 8080 (external entrypoint)
java -jar api-gateway/target/api-gateway-*.jar

# 4. Start microservices (distinct ports)
java -Dserver.port=8081 -jar loads-service/target/loads-service-*.jar
java -Dserver.port=8082 -jar rifles-service/target/rifles-service-*.jar
java -Dserver.port=8083 -jar components-service/target/components-service-*.jar
java -Dserver.port=8084 -jar mcp-server/target/mcp-server-*.jar
```
> **Tip:** You can also override ports via the `SERVER_PORT` environment variable or an `application-local.yml` profile.
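Once the processes are up, each one can be spot-checked through its Spring Boot Actuator health endpoint (assuming the default health exposure; ports as assigned above):

```shell
# Check health of each locally started service
for port in 8888 8761 8080 8081 8082 8083 8084; do
  curl -s "http://localhost:$port/actuator/health"; echo " <- port $port"
done
```

A healthy service responds with a JSON body containing `"status":"UP"`.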
Once the services are up and running, you can access them at the following URLs:
- API Gateway: http://localhost:8080
- Keycloak Admin Console: http://localhost:7080
- Grafana Dashboard: http://localhost:3000
For Kubernetes deployment, services are accessible via NodePort or port-forwarding:
```shell
# Port-forward API Gateway
kubectl port-forward -n reloading service/api-gateway 8080:8080

# Port-forward Grafana
kubectl port-forward -n observability service/grafana-service 3000:3000

# Port-forward Keycloak
kubectl port-forward -n keycloak service/keycloak-service 7080:8080
```
Alternatively, if using NodePort services, check the assigned ports:
```shell
kubectl get services -A | grep NodePort
```
The project includes an MCP (Model Context Protocol) server that provides AI-assisted tools for managing loads and rifles:
- Integration with GitHub Copilot through the Model Context Protocol
- AI-assisted load development analysis
- Intelligent rifle configuration recommendations
- Natural language queries for load data
To connect GitHub Copilot to the MCP server, configure the `.vscode/mcp.json` file in your project directory:
```json
{
  "servers": {
    "reloading-mcp-server": {
      "type": "sse",
      "url": "http://localhost:8080/sse"
    }
  }
}
```
API endpoints are documented using OpenAPI (Swagger); once the services are running, the interactive documentation is available from each service's Swagger UI.
Alternatively, use the `.http` files in the top-level `test/` directory with the VS Code REST Client extension for manual testing (see the API Testing Guide).
The application is composed of the following services:
- Config Server: Centralized configuration management for all services
- Discovery Server: Service registry and discovery using Eureka
- API Gateway: Routes and filters requests to appropriate services
- Rifles Service: Manages rifle data and configurations
- Loads Service: Handles load development data including groups and shots
- Components Service: Manages reloading components (cases, propellants/powders, primers, projectiles/bullets) with full-text search capabilities
- MCP Server: Provides AI-assisted tools via Model Context Protocol for loads and rifles management (MCP Server Guide)
- Common Library: Shared DTOs, mappers, utilities (not a runtime service)
- Integration Tests Module: End-to-end verification (build-time only)
The centralized configuration for all services is stored in a separate GitHub repository: https://github.com/zhoozhoo/spring-load-development-config
The Config Server automatically picks up configuration files from this repository at startup.
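You can spot-check what the Config Server serves for a given application through its standard `/{application}/{profile}` REST endpoint. The service name below is one of this project's services; the `default` profile is assumed:

```shell
# Fetch the configuration the Config Server would hand to loads-service
curl -s http://localhost:8888/loads-service/default
```

The response is a JSON document listing the property sources resolved from the config repository.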
The application uses Keycloak for identity and access management with the following features:
- Role-based access control (RBAC)
- JWT token-based authentication
- OAuth2/OpenID Connect integration
- Predefined roles: RELOADER
- Fine-grained permissions for loads and rifles management
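Keycloak places realm roles inside the `realm_access.roles` claim of the access token. The stdlib-only sketch below illustrates that token shape: it encodes a sample claim set the way a JWT segment is encoded (Base64URL, unpadded) and checks for the RELOADER role. The username and claim values are made up:

```java
import java.util.Base64;

public class JwtRoleCheck {
    // Hypothetical sample resembling a Keycloak access-token payload.
    static final String PAYLOAD_JSON =
        "{\"preferred_username\":\"alice\",\"realm_access\":{\"roles\":[\"RELOADER\"]}}";

    public static void main(String[] args) {
        // A JWT is header.payload.signature, each segment Base64URL-encoded.
        String encoded = Base64.getUrlEncoder().withoutPadding()
            .encodeToString(PAYLOAD_JSON.getBytes());
        // A resource server decodes the payload segment to read the claims.
        String decoded = new String(Base64.getUrlDecoder().decode(encoded));
        boolean hasReloaderRole = decoded.contains("\"RELOADER\"");
        System.out.println(hasReloaderRole); // true
    }
}
```

In the real services, Spring Security performs this decoding and signature validation; the snippet only shows where the role lives in the token.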
The project includes a comprehensive observability stack whose components work together:
- OpenTelemetry Collector: Centralized collection of telemetry data (traces, logs, metrics)
- Tempo: Distributed tracing backend for storing and querying traces
- Loki: Log aggregation system for centralized log storage and querying
- Prometheus: Metrics collection and alerting
- Grafana: Unified dashboards for visualizing metrics, traces, and logs
Access the observability components at:
- Grafana (unified dashboards): http://localhost:3000
- Prometheus (metrics): http://localhost:9091
The diagram below shows the monitoring data flow used by this project. Services export metrics via the Actuator Prometheus endpoint and send telemetry (traces and logs) over OTLP to an OpenTelemetry Collector. The Collector routes traces to Tempo, logs to Loki, and metrics to Prometheus. Grafana visualizes metrics, traces, and logs from all three backends.
```mermaid
flowchart LR
    subgraph Services["Services / Applications"]
        GW["API Gateway"]
        LS["Loads Service"]
        RS["Rifles Service"]
        CS["Components Service"]
        MCP["MCP Server"]
        KC["Keycloak"]
    end
    subgraph OTEL["Observability Plane"]
        OC["OpenTelemetry Collector\n:4317 / :4318"]
        Tempo["Tempo\n(Trace Store)"]
        Loki["Loki\n(Log Store)"]
        Prom["Prometheus"]
        Graf["Grafana\n:3000"]
    end
    %% Services -> Collector
    GW -->|"OTLP"| OC
    LS -->|"OTLP"| OC
    RS -->|"OTLP"| OC
    CS -->|"OTLP"| OC
    MCP -->|"OTLP"| OC
    KC -->|"OTLP (traces)"| OC
    %% Collector -> backends
    OC -->|"traces"| Tempo
    OC -->|"logs"| Loki
    OC -->|"metrics"| Prom
    %% Visualisation
    Tempo -->|traces| Graf
    Loki -->|logs| Graf
    Prom -->|metrics| Graf
    classDef serviceStyle fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000000
    classDef obsStyle fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:#000000
    class GW,LS,RS,CS,MCP,KC serviceStyle
    class OC,Tempo,Loki,Prom,Graf obsStyle
```
This observability architecture shows how all telemetry data (traces, logs, and metrics) is centrally collected by the OpenTelemetry Collector and distributed to specialized backends: Tempo for traces, Loki for logs, and Prometheus for metrics. Grafana provides unified dashboards combining all three data types for comprehensive observability.
The Spring Load Development application supports two primary deployment models: Docker Compose for local development and Kubernetes with Helm for production deployments.
```mermaid
flowchart TB
    User(["User / Client"]) -->|REST :8080| DC_GW
    subgraph "Docker Compose - Local Development"
        subgraph "Application Services"
            DC_GW[API Gateway<br/>:8080]
            DC_LOADS[Loads Service<br/>:8081]
            DC_RIFLES[Rifles Service<br/>:8082]
            DC_COMP[Components Service<br/>:8084]
            DC_MCP[MCP Server<br/>:8083]
        end
        subgraph "Support Services"
            DC_CONFIG[Config Server<br/>:8888]
            DC_DISC[Discovery Server<br/>:8761]
        end
        subgraph "Infrastructure"
            DC_PG[(PostgreSQL<br/>:5432)]
            DC_KC[Keycloak<br/>:7080]
            DC_KC_DB[(Keycloak DB<br/>:5433)]
        end
        subgraph "Observability"
            DC_GRAF[Grafana<br/>:3000]
            DC_LOKI[Loki]
            DC_TEMPO[Tempo]
            DC_PROM[Prometheus]
            DC_OTEL[OTEL Collector]
        end
    end
    %% Gateway routes
    DC_GW --> DC_LOADS
    DC_GW --> DC_RIFLES
    DC_GW --> DC_COMP
    DC_GW --> DC_MCP
    %% Database connections (MCP Server has no DB)
    DC_LOADS --> DC_PG
    DC_RIFLES --> DC_PG
    DC_COMP --> DC_PG
    %% MCP Server calls downstream services
    DC_MCP -.->|HTTP| DC_LOADS
    DC_MCP -.->|HTTP| DC_RIFLES
    %% Auth connections
    DC_GW -->|OAuth2/UMA| DC_KC
    DC_KC --> DC_KC_DB
    %% Config & Discovery
    DC_LOADS -.-> DC_CONFIG
    DC_RIFLES -.-> DC_CONFIG
    DC_COMP -.-> DC_CONFIG
    DC_MCP -.-> DC_CONFIG
    DC_LOADS -.-> DC_DISC
    DC_RIFLES -.-> DC_DISC
    DC_COMP -.-> DC_DISC
    DC_MCP -.-> DC_DISC
    %% Telemetry
    DC_GW --> DC_OTEL
    DC_LOADS --> DC_OTEL
    DC_RIFLES --> DC_OTEL
    DC_COMP --> DC_OTEL
    DC_MCP --> DC_OTEL
    DC_KC -->|traces| DC_OTEL
    DC_OTEL --> DC_LOKI
    DC_OTEL --> DC_TEMPO
    DC_OTEL --> DC_PROM
    DC_LOKI --> DC_GRAF
    DC_TEMPO --> DC_GRAF
    DC_PROM --> DC_GRAF
    classDef clientStyle fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:#000000
    classDef appStyle fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000000
    classDef supportStyle fill:#e8eaf6,stroke:#3949ab,stroke-width:2px,color:#000000
    classDef infraStyle fill:#fff3e0,stroke:#f57c00,stroke-width:2px,color:#000000
    classDef obsStyle fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:#000000
    class User clientStyle
    class DC_GW,DC_LOADS,DC_RIFLES,DC_COMP,DC_MCP appStyle
    class DC_CONFIG,DC_DISC supportStyle
    class DC_PG,DC_KC,DC_KC_DB infraStyle
    class DC_GRAF,DC_LOKI,DC_TEMPO,DC_PROM,DC_OTEL obsStyle
```
Docker Compose Benefits:
- Single command deployment: `docker-compose up -d`
- Automatic service networking and discovery
- Easy local debugging with port mapping
- Fast iteration during development
- Minimal resource requirements
- Simple configuration via `.env` file
```mermaid
flowchart TB
    User(["User / Client"]) -->|REST :30090| K8S_GW
    subgraph "Kubernetes Cluster"
        subgraph "Namespace: reloading"
            K8S_CM[/ConfigMaps/]
            K8S_GW[API Gateway<br/>NodePort: 30090]
            K8S_LOADS[Loads Service]
            K8S_RIFLES[Rifles Service]
            K8S_COMP[Components Service]
            K8S_MCP[MCP Server]
        end
        subgraph "Namespace: postgres"
            K8S_PG[(PostgreSQL StatefulSet<br/>8Gi PVC)]
        end
        subgraph "Namespace: keycloak"
            K8S_KC_PG[(Keycloak PostgreSQL<br/>StatefulSet<br/>8Gi PVC)]
            K8S_KC[Keycloak<br/>NodePort: 30080]
        end
        subgraph "Namespace: observability"
            K8S_GRAF[Grafana<br/>NodePort: 30000<br/>5Gi PVC]
            K8S_LOKI[Loki<br/>10Gi PVC]
            K8S_TEMPO[Tempo<br/>10Gi PVC]
            K8S_PROM[Prometheus<br/>10Gi PVC]
            K8S_OTEL[OTEL Collector<br/>NodePort: 30317]
        end
    end
    %% Gateway routes
    K8S_GW --> K8S_LOADS
    K8S_GW --> K8S_RIFLES
    K8S_GW --> K8S_COMP
    K8S_GW --> K8S_MCP
    %% Database connections (MCP Server has no DB)
    K8S_LOADS --> K8S_PG
    K8S_RIFLES --> K8S_PG
    K8S_COMP --> K8S_PG
    %% MCP Server calls downstream services
    K8S_MCP -.->|HTTP| K8S_LOADS
    K8S_MCP -.->|HTTP| K8S_RIFLES
    %% ConfigMaps (replaces Config Server in K8s)
    K8S_CM -.-> K8S_GW
    K8S_CM -.-> K8S_LOADS
    K8S_CM -.-> K8S_RIFLES
    K8S_CM -.-> K8S_COMP
    K8S_CM -.-> K8S_MCP
    %% Auth
    K8S_GW -->|OAuth2/UMA| K8S_KC
    K8S_KC --> K8S_KC_PG
    %% Telemetry
    K8S_GW --> K8S_OTEL
    K8S_LOADS --> K8S_OTEL
    K8S_RIFLES --> K8S_OTEL
    K8S_COMP --> K8S_OTEL
    K8S_MCP --> K8S_OTEL
    K8S_KC -->|traces| K8S_OTEL
    K8S_OTEL --> K8S_LOKI
    K8S_OTEL --> K8S_TEMPO
    K8S_OTEL --> K8S_PROM
    K8S_LOKI --> K8S_GRAF
    K8S_TEMPO --> K8S_GRAF
    K8S_PROM --> K8S_GRAF
    classDef clientStyle fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:#000000
    classDef appStyle fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000000
    classDef configStyle fill:#e8eaf6,stroke:#3949ab,stroke-width:2px,color:#000000
    classDef infraStyle fill:#fff3e0,stroke:#f57c00,stroke-width:2px,color:#000000
    classDef obsStyle fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:#000000
    class User clientStyle
    class K8S_GW,K8S_LOADS,K8S_RIFLES,K8S_COMP,K8S_MCP appStyle
    class K8S_CM configStyle
    class K8S_PG,K8S_KC_PG,K8S_KC infraStyle
    class K8S_GRAF,K8S_LOKI,K8S_TEMPO,K8S_PROM,K8S_OTEL obsStyle
```
Kubernetes Benefits:
- Production-ready with high availability support
- Automatic scaling and self-healing
- Persistent storage with StatefulSets
- Namespace isolation for security
- ConfigMaps and Secrets management
- Rolling updates and rollbacks
- Resource limits and health checks
- Helm chart for easy management
| Aspect | Docker Compose | Kubernetes |
|---|---|---|
| Deployment | Single file, one command | Helm chart with multiple resources |
| Scaling | Manual container scaling | Horizontal pod autoscaling |
| Networking | Simple port mapping | Services, NodePorts, Ingress |
| Storage | Docker volumes | PersistentVolumeClaims, StatefulSets |
| Configuration | Environment variables, `.env` files | ConfigMaps, Secrets |
| Use Case | Local development, testing | Production, staging, multi-node |
| Resource Overhead | Minimal | Requires cluster infrastructure |
| Service Discovery | Docker DNS | Kubernetes DNS, Service discovery |
| Updates | Manual restart | Rolling updates, zero downtime |
```mermaid
flowchart LR
    subgraph Clients
        User[User]
        Copilot[GitHub Copilot]
    end
    APIGateway["API Gateway<br/>:8080<br/>CircuitBreaker"]
    subgraph Microservices
        LoadsService[Loads Service]
        RiflesService[Rifles Service]
        ComponentsService[Components Service]
        MCPServer[MCP Server]
    end
    subgraph DockerOnly["Infrastructure - Docker/Local only"]
        ConfigServer["Config Server<br/>:8888"]
        DiscoveryServer["Discovery Server<br/>:8761"]
    end
    Keycloak["Keycloak Auth<br/>:7080"]
    subgraph Observability
        OTEL[OTel Collector]
        Prom[Prometheus]
        Loki[Loki]
        Tempo[Tempo]
        Graf[Grafana]
    end
    Postgres[(PostgreSQL 18)]
    User -->|REST| APIGateway
    Copilot -->|MCP SSE| APIGateway
    APIGateway --> LoadsService
    APIGateway --> RiflesService
    APIGateway --> ComponentsService
    APIGateway --> MCPServer
    %% Database connections (MCP Server has no DB)
    LoadsService --> Postgres
    RiflesService --> Postgres
    ComponentsService --> Postgres
    %% MCP Server calls downstream services, not DB
    MCPServer -.->|HTTP| LoadsService
    MCPServer -.->|HTTP| RiflesService
    %% Auth
    APIGateway -->|OAuth2/UMA| Keycloak
    LoadsService -->|JWT| Keycloak
    RiflesService -->|JWT| Keycloak
    ComponentsService -->|JWT| Keycloak
    MCPServer -->|JWT| Keycloak
    %% Config & Discovery (Docker/Local only)
    LoadsService -.->|Config| ConfigServer
    RiflesService -.->|Config| ConfigServer
    ComponentsService -.->|Config| ConfigServer
    MCPServer -.->|Config| ConfigServer
    LoadsService -.->|Register| DiscoveryServer
    RiflesService -.->|Register| DiscoveryServer
    ComponentsService -.->|Register| DiscoveryServer
    MCPServer -.->|Register| DiscoveryServer
    %% Telemetry
    APIGateway -->|Telemetry| OTEL
    LoadsService -->|Telemetry| OTEL
    RiflesService -->|Telemetry| OTEL
    ComponentsService -->|Telemetry| OTEL
    MCPServer -->|Telemetry| OTEL
    OTEL --> Prom
    OTEL --> Loki
    OTEL --> Tempo
    Prom --> Graf
    Loki --> Graf
    Tempo --> Graf
    classDef clientStyle fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:#000000
    classDef serviceStyle fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000000
    classDef infraStyle fill:#e8eaf6,stroke:#3949ab,stroke-width:2px,color:#000000
    classDef authStyle fill:#fff3e0,stroke:#f57c00,stroke-width:2px,color:#000000
    classDef dataStyle fill:#fff3e0,stroke:#f57c00,stroke-width:2px,color:#000000
    classDef obsStyle fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:#000000
    class User,Copilot clientStyle
    class APIGateway,LoadsService,RiflesService,ComponentsService,MCPServer serviceStyle
    class ConfigServer,DiscoveryServer infraStyle
    class Keycloak authStyle
    class Postgres dataStyle
    class OTEL,Prom,Loki,Tempo,Graf obsStyle
```
The application uses PostgreSQL 18.2 with the following schema design. Measurement columns use JSONB to store JSR-385 Quantity values (e.g. `{"value": 175.5, "unit": "grain"}`) and JSR-354 MonetaryAmount values (e.g. `{"amount": 45.99, "currency": "USD"}`). The component tables (projectiles, propellants, primers, cases) include full-text search via PostgreSQL's `tsvector` type with GIN indexes, enabling efficient natural language searches across manufacturer names, types, and other attributes. Tables are grouped by owning service; each service uses its own schema.
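The JSONB measurement shape is straightforward to produce; the record below is an illustrative, stdlib-only sketch of that document structure (the real services use JSR-385 types rather than this hand-rolled record):

```java
// Illustrative sketch of the JSONB document stored for a measurement column.
public class QuantityJsonDemo {

    // Mirrors the {"value": ..., "unit": ...} shape described above.
    record QuantityJson(double value, String unit) {
        String toJson() {
            // %s on a double uses Double.toString, so output is locale-independent
            return String.format("{\"value\": %s, \"unit\": \"%s\"}", value, unit);
        }
    }

    public static void main(String[] args) {
        QuantityJson bulletWeight = new QuantityJson(175.5, "grain");
        System.out.println(bulletWeight.toJson());
    }
}
```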
```mermaid
erDiagram
    %% ── Loads Service ──
    LOADS {
        BIGSERIAL id PK
        VARCHAR owner_id "NOT NULL"
        VARCHAR name "NOT NULL"
        TEXT description
        VARCHAR powder_manufacturer "NOT NULL"
        VARCHAR powder_type "NOT NULL"
        VARCHAR bullet_manufacturer "NOT NULL"
        VARCHAR bullet_type "NOT NULL"
        JSONB bullet_weight "NOT NULL; Quantity Mass"
        VARCHAR primer_manufacturer "NOT NULL"
        VARCHAR primer_type "NOT NULL"
        JSONB distance_from_lands "Quantity Length"
        JSONB case_overall_length "Quantity Length"
        JSONB neck_tension "Quantity Length"
        BIGINT rifle_id FK
    }
    GROUPS {
        BIGSERIAL id PK
        VARCHAR owner_id "NOT NULL"
        BIGSERIAL load_id FK "NOT NULL"
        DATE date "NOT NULL"
        JSONB powder_charge "NOT NULL; Quantity Mass"
        JSONB target_range "NOT NULL; Quantity Length"
        JSONB group_size "Quantity Length"
    }
    SHOTS {
        BIGSERIAL id PK
        VARCHAR owner_id "NOT NULL"
        BIGSERIAL group_id FK "NOT NULL"
        JSONB velocity "Quantity Speed"
    }
    %% ── Rifles Service ──
    RIFLES {
        BIGSERIAL id PK
        VARCHAR owner_id "NOT NULL"
        VARCHAR name "NOT NULL"
        TEXT description
        VARCHAR caliber "NOT NULL"
        JSONB barrel_length "Quantity Length"
        VARCHAR barrel_contour
        JSONB rifling "twistRate and twistDirection"
        JSONB zeroing "sightHeight and zeroDistance"
    }
    %% ── Components Service ──
    PROJECTILES {
        BIGSERIAL id PK
        VARCHAR owner_id "NOT NULL"
        VARCHAR manufacturer "NOT NULL"
        JSONB weight "NOT NULL; Quantity Mass"
        VARCHAR type "NOT NULL"
        JSONB cost "NOT NULL; MonetaryAmount"
        INTEGER quantity_per_box "NOT NULL"
        TSVECTOR search_vector "GIN indexed"
    }
    PROPELLANTS {
        BIGSERIAL id PK
        VARCHAR owner_id "NOT NULL"
        VARCHAR manufacturer "NOT NULL"
        VARCHAR type "NOT NULL"
        JSONB cost "MonetaryAmount"
        JSONB weight_per_container "Quantity Mass"
        TSVECTOR search_vector "GIN indexed"
    }
    PRIMERS {
        BIGSERIAL id PK
        VARCHAR owner_id "NOT NULL"
        VARCHAR manufacturer "NOT NULL"
        VARCHAR type "NOT NULL"
        VARCHAR size "NOT NULL"
        JSONB cost "NOT NULL; MonetaryAmount"
        JSONB quantity_per_box "NOT NULL; Quantity Dimensionless"
        TSVECTOR search_vector "GIN indexed"
    }
    CASES {
        BIGSERIAL id PK
        VARCHAR owner_id "NOT NULL"
        VARCHAR manufacturer "NOT NULL"
        VARCHAR caliber "NOT NULL"
        VARCHAR primer_size "NOT NULL"
        JSONB cost "NOT NULL; MonetaryAmount"
        JSONB quantity_per_box "NOT NULL; Quantity Dimensionless"
        TSVECTOR search_vector "GIN indexed"
    }
    %% Relationships
    LOADS ||--o{ GROUPS : "has"
    GROUPS ||--o{ SHOTS : "has"
    RIFLES ||--o{ LOADS : "uses"
```
The Components Service provides full-text search functionality for all component types (cases, powders, primers, bullets). Each component table includes a `search_vector` column of type `tsvector` that is automatically maintained and indexed for fast searching.
Search Features:
- Natural language queries across manufacturer names, types, and attributes
- Fuzzy matching and relevance ranking
- Case-insensitive searches
- Support for partial word matching
- PostgreSQL GIN indexes for optimal performance
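Under the hood, such a search typically boils down to a `tsvector` match with rank ordering. The query below is a sketch of that shape against a hypothetical projectiles table, using standard PostgreSQL full-text functions:

```sql
-- Hypothetical query shape for a full-text component search
SELECT id, manufacturer, type,
       ts_rank(search_vector, websearch_to_tsquery('english', 'Lapua')) AS rank
FROM projectiles
WHERE search_vector @@ websearch_to_tsquery('english', 'Lapua')
ORDER BY rank DESC;
```

The GIN index on `search_vector` lets the `@@` match use the index rather than scanning the table.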
Example Search Queries:
- Find all Lapua cases: `GET /api/cases/search?query=Lapua`
- Search for H4350 powder: `GET /api/propellants/search?query=H4350`
- Find 140 grain bullets: `GET /api/projectiles/search?query=140`
- Search for CCI primers: `GET /api/primers/search?query=CCI`
For more details on search functionality and API usage, refer to the API Testing Guide.
- ❏ Update MCP server to support resources and prompts
- ❏ Add brass case attributes such as neck tension, headspace, etc.
- ❏ Implement load comparison and analysis tools
- ❏ Add shooting session tracking and analytics
- ❏ Enhance full-text search capabilities for loads and rifles
- ❏ Add ballistic coefficient calculations and external ballistics