Enhance GitHub Copilot Service Proxy with performance optimizations and timeout configurations
- Gzip the built binary for release assets in GitHub Actions workflow.
- Update README.md to reflect project name change and add performance features.
- Introduce timeout configurations in config.go and config.example.json.
- Implement circuit breaker and request coalescing in proxy.go for improved reliability.
- Add profiling endpoints for monitoring and performance analysis.
- Refactor HTTP client initialization to use shared client with configurable timeouts.
- Enhance worker pool management for concurrent request processing.
- Implement graceful shutdown for worker pool during server termination.
README.md: 92 additions & 4 deletions
@@ -1,4 +1,4 @@
-# GitHub Copilot SVCS Proxy
+# GitHub Copilot Service Proxy
 
 This project provides a reverse proxy for GitHub Copilot, exposing OpenAI-compatible endpoints for use with tools and clients that expect the OpenAI API. It follows the authentication and token management approach used by [OpenCode](https://github.com/sst/opencode).
 
@@ -21,6 +21,8 @@ This project provides a reverse proxy for GitHub Copilot, exposing OpenAI-compat
 - **Graceful Shutdown**: Proper signal handling and graceful server shutdown
 - **Comprehensive Logging**: Request/response logging for debugging and monitoring
 - **Enhanced CLI Commands**: Status monitoring, manual token refresh, and detailed configuration display
@@ -139,6 +180,15 @@ GET http://localhost:8081/v1/models
 GET http://localhost:8081/health
 ```
 
+### Profiling Endpoints (Production Monitoring)
+```bash
+GET http://localhost:8081/debug/pprof/          # Overview of available profiles
+GET http://localhost:8081/debug/pprof/heap      # Memory heap profile
+GET http://localhost:8081/debug/pprof/goroutine # Goroutine profile
+GET http://localhost:8081/debug/pprof/profile   # CPU profile (30s sampling)
+GET http://localhost:8081/debug/pprof/trace     # Execution trace
+```
+
 ## Reliability & Error Handling
 
 ### Automatic Token Management
@@ -182,7 +232,19 @@ The configuration is stored in `~/.local/share/github-copilot-svcs/config.json`:
   "github_token": "gho_...",
   "copilot_token": "ghu_...",
   "expires_at": 1720000000,
-  "refresh_in": 1500
+  "refresh_in": 1500,
+  "timeouts": {
+    "http_client": 300,
+    "server_read": 30,
+    "server_write": 300,
+    "server_idle": 120,
+    "proxy_context": 300,
+    "circuit_breaker": 30,
+    "keep_alive": 30,
+    "tls_handshake": 10,
+    "dial_timeout": 10,
+    "idle_conn_timeout": 90
+  }
 }
 ```
 
@@ -194,6 +256,32 @@ The configuration is stored in `~/.local/share/github-copilot-svcs/config.json`:
 - `expires_at`: Unix timestamp when the Copilot token expires
 - `refresh_in`: Seconds until token should be refreshed (typically 1500 = 25 minutes)
 
+### Timeout Configuration
+
+All timeout values are specified in seconds and have sensible defaults:
+
+| Field | Default | Description |
+|-------|---------|-------------|
+| `http_client` | 300 | HTTP client timeout for outbound requests to the GitHub Copilot API |
+| `server_read` | 30 | Server timeout for reading incoming requests |
+| `server_write` | 300 | Server timeout for writing responses (increased for streaming) |
+| `server_idle` | 120 | Server timeout for idle connections |
+| `proxy_context` | 300 | Request context timeout for proxy operations |
+| `circuit_breaker` | 30 | Circuit breaker recovery timeout when the API is failing |
+| `keep_alive` | 30 | TCP keep-alive timeout for HTTP connections |
+| `tls_handshake` | 10 | TLS handshake timeout |
+| `dial_timeout` | 10 | Connection dial timeout |
+| `idle_conn_timeout` | 90 | Idle connection timeout in the connection pool |
+
+**Streaming Support**: The service is optimized for long-running streaming chat completions, with timeouts up to 300 seconds (5 minutes) to support extended AI conversations.
+
+**Custom Configuration**: You can copy `config.example.json` as a starting point and modify timeout values based on your environment: