Move the go examples to use go-agent #82
base: main
Conversation
- Add Node.js 10 Express example with OpenTelemetry instrumentation
- Add Python Sanic async event loop diagnostics with custom metrics
- Include event loop lag monitoring and blocking detection

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <[email protected]>
Add .gitignore files to prevent committing binaries.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <[email protected]>
| message: `Delayed response after ${delay}ms`, | ||
| trace_id: getCurrentTraceId(), | ||
| }); | ||
| }, delay); |
Check failure: Code scanning / CodeQL
Resource exhaustion (High): the duration passed to setTimeout is a user-provided value
Copilot Autofix
AI 26 days ago
In general, to fix this type of issue you must validate and bound any user-controlled duration before passing it to setTimeout (or similar APIs). Reject values that are not finite integers or that exceed a reasonable maximum, and optionally enforce a minimum of zero.
For this codebase, the best fix is to validate req.query.delay in the /slow route, enforce an upper bound (for example, 10 seconds), and return a 400 Bad Request for invalid or out-of-range values. This preserves existing behavior for typical small delays while preventing attackers from creating arbitrarily long-lived timers. Concretely, in nodejs/node10-express/app.js:
- Replace the current line that computes `delay` with a slightly more explicit parse.
- Add checks:
  - If `delay` is `NaN` or negative, respond with HTTP 400 and an error message.
  - If `delay` exceeds a chosen maximum (for example, `MAX_DELAY_MS = 10000`), respond with HTTP 400 and an error message.
- Keep using `setTimeout` with the now-validated `delay`.
No new imports or external libraries are required; we can use built-in Number.isNaN and standard operators.
| @@ -133,8 +133,18 @@ | ||
|
|
||
| // Slow route to test long-running requests | ||
| app.get('/slow', (req, res) => { | ||
| const delay = parseInt(req.query.delay) || 2000; | ||
| const rawDelay = req.query.delay; | ||
| const delay = rawDelay !== undefined ? parseInt(rawDelay, 10) : 2000; | ||
| const MAX_DELAY_MS = 10000; | ||
|
|
||
| if (!Number.isFinite(delay) || Number.isNaN(delay) || delay < 0 || delay > MAX_DELAY_MS) { | ||
| return res.status(400).json({ | ||
| error: 'Invalid delay', | ||
| message: `Delay must be an integer between 0 and ${MAX_DELAY_MS} milliseconds`, | ||
| trace_id: getCurrentTraceId(), | ||
| }); | ||
| } | ||
|
|
||
| setTimeout(() => { | ||
| res.json({ | ||
| message: `Delayed response after ${delay}ms`, |
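If it helps to sanity-check the suggested bound, a throwaway script along these lines exercises the three cases. It assumes the example app is running on localhost:3000 (the default port used in this example) and a Node runtime with the global `fetch` API; neither assumption is part of the suggested fix itself.

```js
// Throwaway check for the bounded /slow route.
async function checkDelayBounds() {
  // Inside the allowed range: handled as before, responds after ~1 second.
  const ok = await fetch('http://localhost:3000/slow?delay=1000');
  console.log('delay=1000     ->', ok.status);       // expected 200

  // Above MAX_DELAY_MS: rejected up front, no long-lived timer is created.
  const tooLong = await fetch('http://localhost:3000/slow?delay=99999999');
  console.log('delay=99999999 ->', tooLong.status);  // expected 400

  // Non-numeric: parseInt yields NaN, so the new validation rejects it too.
  const invalid = await fetch('http://localhost:3000/slow?delay=abc');
  console.log('delay=abc      ->', invalid.status);  // expected 400
}

checkDelayBounds().catch(console.error);
```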
| app.get('/api/pg/users', async (req, res) => { | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('pg.get_all_users'); | ||
|
|
||
| try { | ||
| const result = await pgPool.query('SELECT id, name, email FROM users LIMIT 10'); | ||
|
|
||
| span.setAttribute('db.rows_returned', result.rows.length); | ||
| span.setStatus({ code: opentelemetry.SpanStatusCode.OK }); | ||
|
|
||
| res.json({ | ||
| users: result.rows, | ||
| count: result.rows.length, | ||
| traceId: getTraceId(), | ||
| }); | ||
| } catch (error) { | ||
| span.recordException(error); | ||
| span.setStatus({ | ||
| code: opentelemetry.SpanStatusCode.ERROR, | ||
| message: error.message | ||
| }); | ||
| res.status(500).json({ error: error.message, traceId: getTraceId() }); | ||
| } finally { | ||
| span.end(); | ||
| } | ||
| }); |
Check failure: Code scanning / CodeQL
Missing rate limiting (High): this route handler performs a database access
Copilot Autofix
AI 26 days ago
In general, the problem is fixed by adding rate-limiting middleware so that each client can only call expensive routes a limited number of times per time window. In Express, a common approach is to use the express-rate-limit package, configure a limiter, and apply it either globally (app.use(limiter)) or per-route (app.get('/path', limiter, handler)).
For this specific code, the least invasive and clearest fix is to add express-rate-limit at the top of db-examples.js, configure a limiter appropriate for database-backed endpoints (e.g. 100 requests per 15 minutes per IP), and then apply that limiter specifically to the PostgreSQL routes that touch the database. This avoids changing existing behavior except when a client exceeds the defined limit, and it doesn’t require altering the database logic itself. Concretely: (1) add const rateLimit = require('express-rate-limit'); near the other require calls, (2) define a dbRateLimiter instance after the app is created, and (3) change the PostgreSQL route definitions to include dbRateLimiter as middleware, e.g. app.get('/api/pg/users', dbRateLimiter, async (req, res) => { ... }) and similarly for app.post('/api/pg/users', dbRateLimiter, ...).
| @@ -9,12 +9,19 @@ | ||
| const { MongoClient } = require('mongodb'); | ||
| const redis = require('redis'); | ||
| const axios = require('axios'); | ||
| const rateLimit = require('express-rate-limit'); | ||
|
|
||
| const app = express(); | ||
| const port = process.env.PORT || 3000; | ||
|
|
||
| app.use(express.json()); | ||
|
|
||
| // Rate limiter for database-backed routes | ||
| const dbRateLimiter = rateLimit({ | ||
| windowMs: 15 * 60 * 1000, // 15 minutes | ||
| max: 100, // limit each IP to 100 requests per windowMs | ||
| }); | ||
|
|
||
| // Helper function to get current trace ID | ||
| function getTraceId() { | ||
| const span = opentelemetry.trace.getSpan(opentelemetry.context.active()); | ||
| @@ -40,7 +41,7 @@ | ||
| }); | ||
|
|
||
| // 1. PostgreSQL - Simple SELECT query | ||
| app.get('/api/pg/users', async (req, res) => { | ||
| app.get('/api/pg/users', dbRateLimiter, async (req, res) => { | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('pg.get_all_users'); | ||
|
|
||
| @@ -68,7 +69,7 @@ | ||
| }); | ||
|
|
||
| // 2. PostgreSQL - Parameterized query with transaction | ||
| app.post('/api/pg/users', async (req, res) => { | ||
| app.post('/api/pg/users', dbRateLimiter, async (req, res) => { | ||
| const { name, email } = req.body; | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('pg.create_user'); |
| @@ -19,7 +19,8 @@ | ||
| "@opentelemetry/resources": "1.3.1", | ||
| "@opentelemetry/semantic-conventions": "1.3.1", | ||
| "@opentelemetry/auto-instrumentations-node": "0.31.0", | ||
| "@opentelemetry/exporter-trace-otlp-http": "0.29.2" | ||
| "@opentelemetry/exporter-trace-otlp-http": "0.29.2", | ||
| "express-rate-limit": "^8.2.1" | ||
| }, | ||
| "optionalDependencies": { | ||
| "pg": "^8.7.1", |
| Package | Version | Security advisories |
| --- | --- | --- |
| express-rate-limit (npm) | 8.2.1 | None |
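For anyone adapting this suggestion, a minimal sketch of the same limiter with rate-limit response headers and a JSON 429 body could look like this; the header options and the `message` body are assumptions on top of the suggested `windowMs`/`max` settings, not part of the autofix.

```js
// Sketch only: express-rate-limit configured as in the autofix, plus explicit
// headers and a JSON body for throttled requests (these extras are assumptions).
const rateLimit = require('express-rate-limit');

const dbRateLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,  // 15 minutes, as in the suggested fix
  max: 100,                  // 100 requests per IP per window, as suggested
  standardHeaders: true,     // advertise the budget via RateLimit-* headers
  legacyHeaders: false,      // omit the deprecated X-RateLimit-* headers
  message: { error: 'Too many requests, please retry later' }, // 429 body
});

// Wired exactly as the autofix proposes:
// app.get('/api/pg/users', dbRateLimiter, async (req, res) => { ... });
// app.post('/api/pg/users', dbRateLimiter, async (req, res) => { ... });
```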
| app.post('/api/pg/users', async (req, res) => { | ||
| const { name, email } = req.body; | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('pg.create_user'); | ||
|
|
||
| const client = await pgPool.connect(); | ||
|
|
||
| try { | ||
| await client.query('BEGIN'); | ||
|
|
||
| const insertQuery = 'INSERT INTO users(name, email, created_at) VALUES($1, $2, NOW()) RETURNING id, name, email'; | ||
| const result = await client.query(insertQuery, [name, email]); | ||
|
|
||
| // Simulate audit log insert | ||
| await client.query('INSERT INTO audit_log(action, user_id, timestamp) VALUES($1, $2, NOW())', | ||
| ['user_created', result.rows[0].id]); | ||
|
|
||
| await client.query('COMMIT'); | ||
|
|
||
| span.setAttribute('user.id', result.rows[0].id); | ||
| span.setAttribute('user.email', email); | ||
| span.setStatus({ code: opentelemetry.SpanStatusCode.OK }); | ||
|
|
||
| res.status(201).json({ | ||
| user: result.rows[0], | ||
| traceId: getTraceId(), | ||
| }); | ||
| } catch (error) { | ||
| await client.query('ROLLBACK'); | ||
| span.recordException(error); | ||
| span.setStatus({ | ||
| code: opentelemetry.SpanStatusCode.ERROR, | ||
| message: error.message | ||
| }); | ||
| res.status(500).json({ error: error.message, traceId: getTraceId() }); | ||
| } finally { | ||
| client.release(); | ||
| span.end(); | ||
| } | ||
| }); |
Check failure: Code scanning / CodeQL
Missing rate limiting (High): this route handler performs multiple database accesses
Copilot Autofix
AI 26 days ago
In general, to fix missing rate limiting on an Express route that performs expensive work (like DB access), you add a rate-limiting middleware (e.g., express-rate-limit) and apply it either globally or specifically to the sensitive route(s). This constrains how many requests a client (typically defined by IP) can make in a given time window, reducing DoS risk.
For this specific code, the least-invasive and most targeted fix is:
- Import `express-rate-limit` at the top of `db-examples.js`.
- Define a limiter instance configured for write-heavy/database-intensive routes (for example, a relatively small `max` per time window).
- Apply that limiter only to the `POST /api/pg/users` route (the one flagged by CodeQL) by passing it as middleware before the async handler in `app.post(...)`.
This preserves existing route behavior for allowed requests and only adds HTTP 429 responses when a client exceeds the configured rate. No changes to the DB logic, tracing, or response schemas are required.
Concretely:
- Add `const RateLimit = require('express-rate-limit');` after the other `require` statements.
- Define `const writeLimiter = RateLimit({ ... })` after the `app` and `port` declarations, but before the routes.
- Change the `app.post('/api/pg/users', async (req, res) => { ... })` signature to `app.post('/api/pg/users', writeLimiter, async (req, res) => { ... })`.
| @@ -9,10 +9,17 @@ | ||
| const { MongoClient } = require('mongodb'); | ||
| const redis = require('redis'); | ||
| const axios = require('axios'); | ||
| const RateLimit = require('express-rate-limit'); | ||
|
|
||
| const app = express(); | ||
| const port = process.env.PORT || 3000; | ||
|
|
||
| // Rate limiter for write/database-intensive routes | ||
| const writeLimiter = RateLimit({ | ||
| windowMs: 15 * 60 * 1000, // 15 minutes | ||
| max: 100, // limit each IP to 100 write requests per windowMs | ||
| }); | ||
|
|
||
| app.use(express.json()); | ||
|
|
||
| // Helper function to get current trace ID | ||
| @@ -68,7 +71,7 @@ | ||
| }); | ||
|
|
||
| // 2. PostgreSQL - Parameterized query with transaction | ||
| app.post('/api/pg/users', async (req, res) => { | ||
| app.post('/api/pg/users', writeLimiter, async (req, res) => { | ||
| const { name, email } = req.body; | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('pg.create_user'); |
| @@ -19,7 +19,8 @@ | ||
| "@opentelemetry/resources": "1.3.1", | ||
| "@opentelemetry/semantic-conventions": "1.3.1", | ||
| "@opentelemetry/auto-instrumentations-node": "0.31.0", | ||
| "@opentelemetry/exporter-trace-otlp-http": "0.29.2" | ||
| "@opentelemetry/exporter-trace-otlp-http": "0.29.2", | ||
| "express-rate-limit": "^8.2.1" | ||
| }, | ||
| "optionalDependencies": { | ||
| "pg": "^8.7.1", |
| Package | Version | Security advisories |
| --- | --- | --- |
| express-rate-limit (npm) | 8.2.1 | None |
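A variation on the per-route wiring above is to mount one limiter on the whole `/api/pg` prefix so that new PostgreSQL routes are covered automatically. This is a sketch of an alternative, not what the autofix applies, and it assumes `app` is the Express instance from `db-examples.js`.

```js
// Sketch: shared budget for every route registered under /api/pg.
const rateLimit = require('express-rate-limit');

const pgLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100,                 // shared across all /api/pg routes per IP
});

// Any GET/POST registered under the prefix passes through the limiter first.
app.use('/api/pg', pgLimiter);
```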
| app.get('/api/mysql/orders/:userId', async (req, res) => { | ||
| const { userId } = req.params; | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('mysql.get_user_orders'); | ||
| span.setAttribute('user.id', userId); | ||
|
|
||
| try { | ||
| const [rows] = await mysqlPool.execute( | ||
| `SELECT o.id, o.total, o.status, o.created_at, p.name as product_name | ||
| FROM orders o | ||
| JOIN order_items oi ON o.id = oi.order_id | ||
| JOIN products p ON oi.product_id = p.id | ||
| WHERE o.user_id = ? | ||
| ORDER BY o.created_at DESC | ||
| LIMIT 20`, | ||
| [userId] | ||
| ); | ||
|
|
||
| span.setAttribute('db.rows_returned', rows.length); | ||
| span.setStatus({ code: opentelemetry.SpanStatusCode.OK }); | ||
|
|
||
| res.json({ | ||
| orders: rows, | ||
| count: rows.length, | ||
| traceId: getTraceId(), | ||
| }); | ||
| } catch (error) { | ||
| span.recordException(error); | ||
| span.setStatus({ | ||
| code: opentelemetry.SpanStatusCode.ERROR, | ||
| message: error.message | ||
| }); | ||
| res.status(500).json({ error: error.message, traceId: getTraceId() }); | ||
| } finally { | ||
| span.end(); | ||
| } | ||
| }); |
Check failure: Code scanning / CodeQL
Missing rate limiting (High): this route handler performs a database access
Copilot Autofix
AI 26 days ago
In general, to fix missing rate limiting you should introduce a rate-limiting middleware (for example using express-rate-limit) and apply it either globally (app.use(limiter)) or to particular expensive routes (app.get(path, limiter, handler) / app.post(...)). This middleware will cap the number of requests an individual client can make over a time window, reducing the risk of denial-of-service through excessive DB calls.
For this specific code, the least intrusive and clearest fix is to:
- Import and configure `express-rate-limit` once near the top of `db-examples.js`.
- Create a limiter instance suitable for these DB operations (e.g., a reasonable per-IP ceiling per time window).
- Apply that limiter specifically to the MySQL routes that perform database access, by inserting it as a middleware parameter before the async handlers. This avoids changing any existing functionality inside the handlers and keeps the tracing and DB logic intact.
Concretely:
- Add `const rateLimit = require('express-rate-limit');` near the other `require` calls.
- Define a limiter constant such as `const dbRateLimiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 100 });` after `const app = express();`.
- Update `app.get('/api/mysql/orders/:userId', async (req, res) => { ... })` to `app.get('/api/mysql/orders/:userId', dbRateLimiter, async (req, res) => { ... })`.
- Similarly, rate-limit the batch-insert endpoint `app.post('/api/mysql/products/bulk', ...)` with the same limiter (or a stricter one) to protect the DB from heavy write load.
| @@ -9,10 +9,17 @@ | ||
| const { MongoClient } = require('mongodb'); | ||
| const redis = require('redis'); | ||
| const axios = require('axios'); | ||
| const rateLimit = require('express-rate-limit'); | ||
|
|
||
| const app = express(); | ||
| const port = process.env.PORT || 3000; | ||
|
|
||
| // Rate limiter for database-intensive routes | ||
| const dbRateLimiter = rateLimit({ | ||
| windowMs: 15 * 60 * 1000, // 15 minutes | ||
| max: 100, // limit each IP to 100 requests per windowMs | ||
| }); | ||
|
|
||
| app.use(express.json()); | ||
|
|
||
| // Helper function to get current trace ID | ||
| @@ -125,7 +128,7 @@ | ||
| }); | ||
|
|
||
| // 3. MySQL - SELECT with JOIN | ||
| app.get('/api/mysql/orders/:userId', async (req, res) => { | ||
| app.get('/api/mysql/orders/:userId', dbRateLimiter, async (req, res) => { | ||
| const { userId } = req.params; | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('mysql.get_user_orders'); | ||
| @@ -164,7 +167,7 @@ | ||
| }); | ||
|
|
||
| // 4. MySQL - Batch INSERT | ||
| app.post('/api/mysql/products/bulk', async (req, res) => { | ||
| app.post('/api/mysql/products/bulk', dbRateLimiter, async (req, res) => { | ||
| const { products } = req.body; // Array of {name, price, stock} | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('mysql.bulk_insert_products'); |
| @@ -19,7 +19,8 @@ | ||
| "@opentelemetry/resources": "1.3.1", | ||
| "@opentelemetry/semantic-conventions": "1.3.1", | ||
| "@opentelemetry/auto-instrumentations-node": "0.31.0", | ||
| "@opentelemetry/exporter-trace-otlp-http": "0.29.2" | ||
| "@opentelemetry/exporter-trace-otlp-http": "0.29.2", | ||
| "express-rate-limit": "^8.2.1" | ||
| }, | ||
| "optionalDependencies": { | ||
| "pg": "^8.7.1", |
| Package | Version | Security advisories |
| --- | --- | --- |
| express-rate-limit (npm) | 8.2.1 | None |
| app.post('/api/mysql/products/bulk', async (req, res) => { | ||
| const { products } = req.body; // Array of {name, price, stock} | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('mysql.bulk_insert_products'); | ||
| span.setAttribute('products.count', products.length); | ||
|
|
||
| const connection = await mysqlPool.getConnection(); | ||
|
|
||
| try { | ||
| await connection.beginTransaction(); | ||
|
|
||
| const insertQuery = 'INSERT INTO products (name, price, stock, created_at) VALUES ?'; | ||
| const values = products.map(p => [p.name, p.price, p.stock, new Date()]); | ||
|
|
||
| const [result] = await connection.query(insertQuery, [values]); | ||
|
|
||
| await connection.commit(); | ||
|
|
||
| span.setAttribute('db.rows_inserted', result.affectedRows); | ||
| span.setStatus({ code: opentelemetry.SpanStatusCode.OK }); | ||
|
|
||
| res.status(201).json({ | ||
| inserted: result.affectedRows, | ||
| traceId: getTraceId(), | ||
| }); | ||
| } catch (error) { | ||
| await connection.rollback(); | ||
| span.recordException(error); | ||
| span.setStatus({ | ||
| code: opentelemetry.SpanStatusCode.ERROR, | ||
| message: error.message | ||
| }); | ||
| res.status(500).json({ error: error.message, traceId: getTraceId() }); | ||
| } finally { | ||
| connection.release(); | ||
| span.end(); | ||
| } | ||
| }); |
Check failure: Code scanning / CodeQL
Missing rate limiting (High): this route handler performs a database access
Copilot Autofix
AI 26 days ago
In general, the problem is best fixed by adding a rate-limiting middleware to the Express application for routes that perform expensive operations like database access. A widely used solution is the express-rate-limit package, which can be configured with reasonable defaults and applied either globally or per-route. This ensures the application will only process a bounded number of requests per client over a configured time window, mitigating denial-of-service risks.
For this specific code, the minimal, non-breaking fix is to import express-rate-limit, configure a limiter instance, and apply it to the /api/mysql/products/bulk POST route. This avoids altering the existing behavior of the handler itself; only the number of allowed requests per client in a time window is constrained. Concretely, in nodejs/node10-express/db-examples.js, we will (1) add a require('express-rate-limit') near the top, (2) define a limiter configuration tuned for bulk operations (e.g., relatively low max requests per 15 minutes), and (3) attach this limiter as middleware in the app.post('/api/mysql/products/bulk', ...) definition by inserting it as an argument before the async handler.
| @@ -9,12 +9,19 @@ | ||
| const { MongoClient } = require('mongodb'); | ||
| const redis = require('redis'); | ||
| const axios = require('axios'); | ||
| const RateLimit = require('express-rate-limit'); | ||
|
|
||
| const app = express(); | ||
| const port = process.env.PORT || 3000; | ||
|
|
||
| app.use(express.json()); | ||
|
|
||
| // Rate limiter for expensive bulk operations | ||
| const bulkInsertLimiter = RateLimit({ | ||
| windowMs: 15 * 60 * 1000, // 15 minutes | ||
| max: 50, // limit each IP to 50 bulk insert requests per windowMs | ||
| }); | ||
|
|
||
| // Helper function to get current trace ID | ||
| function getTraceId() { | ||
| const span = opentelemetry.trace.getSpan(opentelemetry.context.active()); | ||
| @@ -164,7 +165,7 @@ | ||
| }); | ||
|
|
||
| // 4. MySQL - Batch INSERT | ||
| app.post('/api/mysql/products/bulk', async (req, res) => { | ||
| app.post('/api/mysql/products/bulk', bulkInsertLimiter, async (req, res) => { | ||
| const { products } = req.body; // Array of {name, price, stock} | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('mysql.bulk_insert_products'); |
| @@ -19,7 +19,8 @@ | ||
| "@opentelemetry/resources": "1.3.1", | ||
| "@opentelemetry/semantic-conventions": "1.3.1", | ||
| "@opentelemetry/auto-instrumentations-node": "0.31.0", | ||
| "@opentelemetry/exporter-trace-otlp-http": "0.29.2" | ||
| "@opentelemetry/exporter-trace-otlp-http": "0.29.2", | ||
| "express-rate-limit": "^8.2.1" | ||
| }, | ||
| "optionalDependencies": { | ||
| "pg": "^8.7.1", |
| Package | Version | Security advisories |
| --- | --- | --- |
| express-rate-limit (npm) | 8.2.1 | None |
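If the orders route should be budgeted per requested user rather than per client IP (for example, when many clients sit behind one NAT), express-rate-limit's `keyGenerator` option can key the counter on the path parameter. The keying choice below is an illustrative assumption, not part of the autofix.

```js
// Sketch: count requests per :userId instead of per IP for the MySQL orders lookup.
const rateLimit = require('express-rate-limit');

const ordersLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,                               // 15 minutes
  max: 100,                                               // per userId, not per IP
  keyGenerator: (req) => `orders:${req.params.userId}`,   // bucket by route parameter
});

// app.get('/api/mysql/orders/:userId', ordersLimiter, async (req, res) => { ... });
```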
| app.put('/api/mongo/cart/:userId', async (req, res) => { | ||
| const { userId } = req.params; | ||
| const { items } = req.body; | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('mongo.update_cart'); | ||
| span.setAttribute('user.id', userId); | ||
|
|
||
| try { | ||
| const db = await connectMongo(); | ||
| const collection = db.collection('carts'); | ||
|
|
||
| const result = await collection.updateOne( | ||
| { userId }, | ||
| { | ||
| $set: { | ||
| items, | ||
| updatedAt: new Date() | ||
| }, | ||
| $setOnInsert: { | ||
| userId, | ||
| createdAt: new Date() | ||
| } | ||
| }, | ||
| { upsert: true } | ||
| ); | ||
|
|
||
| span.setAttribute('mongo.upserted', result.upsertedCount > 0); | ||
| span.setAttribute('mongo.modified', result.modifiedCount); | ||
| span.setStatus({ code: opentelemetry.SpanStatusCode.OK }); | ||
|
|
||
| res.json({ | ||
| updated: result.modifiedCount > 0, | ||
| upserted: result.upsertedCount > 0, | ||
| traceId: getTraceId(), | ||
| }); | ||
| } catch (error) { | ||
| span.recordException(error); | ||
| span.setStatus({ | ||
| code: opentelemetry.SpanStatusCode.ERROR, | ||
| message: error.message | ||
| }); | ||
| res.status(500).json({ error: error.message, traceId: getTraceId() }); | ||
| } finally { | ||
| span.end(); | ||
| } | ||
| }); |
Check failure: Code scanning / CodeQL
Missing rate limiting (High): this route handler performs a database access
Copilot Autofix
AI 26 days ago
In general, to fix missing rate limiting on an expensive route, introduce a rate-limiting middleware (either globally or per-route) that constrains the number of requests an individual client can make within a time window. In Express, a common approach is to use the well-known express-rate-limit package and either apply it to all routes (app.use(limiter)) or to specific sensitive routes (app.put('/path', limiter, handler)).
For this specific case, the minimal, non-breaking fix is to add express-rate-limit as a dependency, require it near the top of db-examples.js, create a limiter configuration appropriate for database operations (e.g., some number of requests per minute per IP), and then apply that limiter specifically to the /api/mongo/cart/:userId route. This avoids changing the behavior of unrelated routes in this example file and keeps existing functionality intact while protecting the MongoDB upsert endpoint from easy abuse. Concretely: above the route definitions, add const rateLimit = require('express-rate-limit'); and define a cartUpdateLimiter limiter (e.g., windowMs: 1 minute, max: 60). Then change app.put('/api/mongo/cart/:userId', async (req, res) => { ... }) to insert cartUpdateLimiter as a middleware: app.put('/api/mongo/cart/:userId', cartUpdateLimiter, async (req, res) => { ... }).
| @@ -9,12 +9,19 @@ | ||
| const { MongoClient } = require('mongodb'); | ||
| const redis = require('redis'); | ||
| const axios = require('axios'); | ||
| const rateLimit = require('express-rate-limit'); | ||
|
|
||
| const app = express(); | ||
| const port = process.env.PORT || 3000; | ||
|
|
||
| app.use(express.json()); | ||
|
|
||
| // Rate limiter for cart updates to protect MongoDB | ||
| const cartUpdateLimiter = rateLimit({ | ||
| windowMs: 60 * 1000, // 1 minute | ||
| max: 60, // limit each IP to 60 requests per windowMs | ||
| }); | ||
|
|
||
| // Helper function to get current trace ID | ||
| function getTraceId() { | ||
| const span = opentelemetry.trace.getSpan(opentelemetry.context.active()); | ||
| @@ -318,7 +319,7 @@ | ||
| }); | ||
|
|
||
| // 7. MongoDB - Update with upsert | ||
| app.put('/api/mongo/cart/:userId', async (req, res) => { | ||
| app.put('/api/mongo/cart/:userId', cartUpdateLimiter, async (req, res) => { | ||
| const { userId } = req.params; | ||
| const { items } = req.body; | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); |
| @@ -19,7 +19,8 @@ | ||
| "@opentelemetry/resources": "1.3.1", | ||
| "@opentelemetry/semantic-conventions": "1.3.1", | ||
| "@opentelemetry/auto-instrumentations-node": "0.31.0", | ||
| "@opentelemetry/exporter-trace-otlp-http": "0.29.2" | ||
| "@opentelemetry/exporter-trace-otlp-http": "0.29.2", | ||
| "express-rate-limit": "^8.2.1" | ||
| }, | ||
| "optionalDependencies": { | ||
| "pg": "^8.7.1", |
| Package | Version | Security advisories |
| --- | --- | --- |
| express-rate-limit (npm) | 8.2.1 | None |
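One small refinement worth considering on top of the suggested `cartUpdateLimiter` is an escape hatch for automated tests, so the 60-requests-per-minute cap does not make the example's test runs flaky. The `skip` option exists in express-rate-limit; gating it on `NODE_ENV` is an assumption about how the example is run.

```js
// Sketch: same limiter values as the suggestion, plus a skip for test traffic.
const rateLimit = require('express-rate-limit');

const cartUpdateLimiter = rateLimit({
  windowMs: 60 * 1000,                          // 1 minute, as suggested
  max: 60,                                      // 60 requests per IP per window
  skip: () => process.env.NODE_ENV === 'test',  // never throttle test runs
});

// app.put('/api/mongo/cart/:userId', cartUpdateLimiter, async (req, res) => { ... });
```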
| app.get('/api/cache/user/:id', async (req, res) => { | ||
| const { id } = req.params; | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('cache.get_user'); | ||
| span.setAttribute('user.id', id); | ||
|
|
||
| const cacheKey = `user:${id}`; | ||
|
|
||
| try { | ||
| // Try to get from cache first | ||
| const cached = await redisClient.get(cacheKey); | ||
|
|
||
| if (cached) { | ||
| span.setAttribute('cache.hit', true); | ||
| span.setStatus({ code: opentelemetry.SpanStatusCode.OK }); | ||
|
|
||
| res.json({ | ||
| user: JSON.parse(cached), | ||
| source: 'cache', | ||
| traceId: getTraceId(), | ||
| }); | ||
| span.end(); | ||
| return; | ||
| } | ||
|
|
||
| span.setAttribute('cache.hit', false); | ||
|
|
||
| // Cache miss - fetch from database | ||
| const result = await pgPool.query('SELECT * FROM users WHERE id = $1', [id]); | ||
|
|
||
| if (result.rows.length === 0) { | ||
| span.setStatus({ code: opentelemetry.SpanStatusCode.OK }); | ||
| res.status(404).json({ error: 'User not found', traceId: getTraceId() }); | ||
| span.end(); | ||
| return; | ||
| } | ||
|
|
||
| const user = result.rows[0]; | ||
|
|
||
| // Store in cache for 5 minutes | ||
| await redisClient.setEx(cacheKey, 300, JSON.stringify(user)); | ||
|
|
||
| span.setAttribute('cache.stored', true); | ||
| span.setStatus({ code: opentelemetry.SpanStatusCode.OK }); | ||
|
|
||
| res.json({ | ||
| user, | ||
| source: 'database', | ||
| traceId: getTraceId(), | ||
| }); | ||
| } catch (error) { | ||
| span.recordException(error); | ||
| span.setStatus({ | ||
| code: opentelemetry.SpanStatusCode.ERROR, | ||
| message: error.message | ||
| }); | ||
| res.status(500).json({ error: error.message, traceId: getTraceId() }); | ||
| } finally { | ||
| span.end(); | ||
| } | ||
| }); |
Check failure: Code scanning / CodeQL
Missing rate limiting (High): this route handler performs multiple database accesses
Copilot Autofix
AI 26 days ago
In general, to fix missing rate limiting on an expensive route, you introduce a rate-limiting mechanism (middleware or inline logic) that tracks per-client usage over a time window and rejects requests that exceed a threshold with HTTP 429, while leaving allowed requests’ behavior unchanged. This can be implemented via a library such as express-rate-limit or via a custom Redis-based limiter, especially appropriate here because Redis is already in use.
The best fix here, without changing existing functionality, is to apply a lightweight, per-IP rate limit specifically to /api/cache/user/:id, using the same Redis client that the file already configures. That avoids new external dependencies and keeps behavior consistent. We can factor out a small helper that enforces a configurable limit and call it at the top of the cache route, returning 429 before performing any Redis get or PostgreSQL query if the client exceeds the limit. We’ll mirror the semantics of the existing /api/rate-limited/resource endpoint: using INCR, setting an expiry for the window, recording telemetry attributes, and adding standard rate-limit headers, but adapt it to be callable from within handlers. Concretely:
- Above the cache route, define an `async function checkRateLimit(redisClient, keyPrefix, clientIp, maxRequests, windowSeconds, span)` that:
  - Builds a Redis key (e.g., `${keyPrefix}:${clientIp}`).
  - INCRs the counter and sets the expiry on first use.
  - Retrieves the TTL.
  - Populates span attributes.
  - Returns an object such as `{ allowed, current, ttl, maxRequests }` instead of directly sending a response.
- In `/api/cache/user/:id`, right after starting the span and before any Redis/DB calls:
  - Call `checkRateLimit(redisClient, 'rate:cache_user', clientIp, 100, 60, span)` (or similar).
  - If `!allowed`, respond with `429` and appropriate headers and JSON, end the span, and `return`.
  - If allowed, set rate-limit headers and proceed exactly as before with the Redis get / PostgreSQL query / Redis setEx logic.
All changes are confined to nodejs/node10-express/db-examples.js in the shown region; no new imports are needed because we reuse the existing redisClient and opentelemetry objects.
| @@ -383,16 +383,85 @@ | ||
| await redisClient.connect(); | ||
| })(); | ||
|
|
||
| /** | ||
| * Simple Redis-based rate limiter. | ||
| * | ||
| * @param {import('redis').RedisClientType} redisClient | ||
| * @param {string} keyPrefix - Prefix for the Redis key (e.g., "rate:cache_user") | ||
| * @param {string} clientIp - Client identifier (usually IP address) | ||
| * @param {number} maxRequests - Maximum allowed requests in the window | ||
| * @param {number} windowSeconds - Duration of the window in seconds | ||
| * @param {import('@opentelemetry/api').Span} span - Current tracing span | ||
| * @returns {Promise<{ allowed: boolean, current: number, ttl: number, maxRequests: number }>} | ||
| */ | ||
| async function checkRateLimit(redisClient, keyPrefix, clientIp, maxRequests, windowSeconds, span) { | ||
| const rateLimitKey = `${keyPrefix}:${clientIp}`; | ||
|
|
||
| const current = await redisClient.incr(rateLimitKey); | ||
|
|
||
| if (current === 1) { | ||
| await redisClient.expire(rateLimitKey, windowSeconds); | ||
| } | ||
|
|
||
| const ttl = await redisClient.ttl(rateLimitKey); | ||
|
|
||
| if (span) { | ||
| span.setAttribute('rate_limit.current', current); | ||
| span.setAttribute('rate_limit.max', maxRequests); | ||
| span.setAttribute('rate_limit.remaining', Math.max(0, maxRequests - current)); | ||
| span.setAttribute('rate_limit.window_seconds', windowSeconds); | ||
| } | ||
|
|
||
| const allowed = current <= maxRequests; | ||
|
|
||
| if (span) { | ||
| span.setAttribute('rate_limit.exceeded', !allowed); | ||
| } | ||
|
|
||
| return { allowed, current, ttl, maxRequests }; | ||
| } | ||
|
|
||
| // 8. Redis - Cache-aside pattern | ||
| app.get('/api/cache/user/:id', async (req, res) => { | ||
| const { id } = req.params; | ||
| const clientIp = req.ip; | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('cache.get_user'); | ||
| span.setAttribute('user.id', id); | ||
| span.setAttribute('client.ip', clientIp); | ||
|
|
||
| const cacheKey = `user:${id}`; | ||
| const maxRequests = 100; | ||
| const windowSeconds = 60; | ||
|
|
||
| try { | ||
| // Rate limiting per client IP for this cache endpoint | ||
| const rateInfo = await checkRateLimit( | ||
| redisClient, | ||
| 'rate:cache_user', | ||
| clientIp, | ||
| maxRequests, | ||
| windowSeconds, | ||
| span | ||
| ); | ||
|
|
||
| res.set('X-RateLimit-Limit', rateInfo.maxRequests); | ||
| res.set('X-RateLimit-Remaining', Math.max(0, rateInfo.maxRequests - rateInfo.current)); | ||
| if (rateInfo.ttl >= 0) { | ||
| res.set('X-RateLimit-Reset', Date.now() + (rateInfo.ttl * 1000)); | ||
| } | ||
|
|
||
| if (!rateInfo.allowed) { | ||
| span.setStatus({ code: opentelemetry.SpanStatusCode.OK }); | ||
| res.status(429).json({ | ||
| error: 'Rate limit exceeded', | ||
| retryAfter: rateInfo.ttl, | ||
| traceId: getTraceId(), | ||
| }); | ||
| span.end(); | ||
| return; | ||
| } | ||
|
|
||
| // Try to get from cache first | ||
| const cached = await redisClient.get(cacheKey); | ||
|
|
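One subtlety in the suggested helper: INCR and EXPIRE are issued as separate commands, so a crash between them could leave a counter with no TTL. A sketch of a variant that creates the key and its expiry atomically, assuming the same node-redis v4 style client already used in this file:

```js
// Sketch: create the counter with its expiry in a single atomic SET ... NX EX,
// then increment. Even if the process dies right after the SET, the key still
// has a TTL, so it cannot accumulate forever.
async function incrementWithWindow(redisClient, key, windowSeconds) {
  // Only creates the key (value "0", TTL windowSeconds) if it does not exist yet.
  await redisClient.set(key, '0', { NX: true, EX: windowSeconds });
  // Returns the number of requests seen in the current window.
  return redisClient.incr(key);
}

// Drop-in for the first two Redis calls inside checkRateLimit:
// const current = await incrementWithWindow(redisClient, rateLimitKey, windowSeconds);
```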
| app.get('/api/rate-limited/resource', async (req, res) => { | ||
| const clientIp = req.ip; | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('rate_limit.check'); | ||
| span.setAttribute('client.ip', clientIp); | ||
|
|
||
| const rateLimitKey = `rate:${clientIp}`; | ||
| const maxRequests = 10; | ||
| const windowSeconds = 60; | ||
|
|
||
| try { | ||
| const current = await redisClient.incr(rateLimitKey); | ||
|
|
||
| if (current === 1) { | ||
| await redisClient.expire(rateLimitKey, windowSeconds); | ||
| } | ||
|
|
||
| const ttl = await redisClient.ttl(rateLimitKey); | ||
|
|
||
| span.setAttribute('rate_limit.current', current); | ||
| span.setAttribute('rate_limit.max', maxRequests); | ||
| span.setAttribute('rate_limit.remaining', Math.max(0, maxRequests - current)); | ||
|
|
||
| if (current > maxRequests) { | ||
| span.setAttribute('rate_limit.exceeded', true); | ||
| span.setStatus({ code: opentelemetry.SpanStatusCode.OK }); | ||
|
|
||
| res.status(429).json({ | ||
| error: 'Rate limit exceeded', | ||
| retryAfter: ttl, | ||
| traceId: getTraceId(), | ||
| }); | ||
| span.end(); | ||
| return; | ||
| } | ||
|
|
||
| span.setAttribute('rate_limit.exceeded', false); | ||
| span.setStatus({ code: opentelemetry.SpanStatusCode.OK }); | ||
|
|
||
| res.set('X-RateLimit-Limit', maxRequests); | ||
| res.set('X-RateLimit-Remaining', maxRequests - current); | ||
| res.set('X-RateLimit-Reset', Date.now() + (ttl * 1000)); | ||
|
|
||
| res.json({ | ||
| message: 'Request successful', | ||
| rateLimit: { | ||
| limit: maxRequests, | ||
| remaining: maxRequests - current, | ||
| reset: ttl | ||
| }, | ||
| traceId: getTraceId(), | ||
| }); | ||
| } catch (error) { | ||
| span.recordException(error); | ||
| span.setStatus({ | ||
| code: opentelemetry.SpanStatusCode.ERROR, | ||
| message: error.message | ||
| }); | ||
| res.status(500).json({ error: error.message, traceId: getTraceId() }); | ||
| } finally { | ||
| span.end(); | ||
| } | ||
| }); |
Check failure: Code scanning / CodeQL
Missing rate limiting (High): this route handler performs multiple database accesses
Copilot Autofix
AI 26 days ago
In general, to fix this kind of issue, you introduce a rate-limiting middleware that runs before the handler and caps the number of requests per client over a time window. In Express, this is commonly done with express-rate-limit. The middleware can be applied globally (app.use(limiter)) or per-route (app.get('/path', limiter, handler)); per-route application avoids affecting unrelated endpoints.
For this file, the least intrusive fix that preserves existing behavior is:
- Add a `require('express-rate-limit')` near the other top-level imports.
- Configure a small, generic limiter (e.g., 60 requests per minute) that uses in-memory storage (the default) so it does not depend on Redis and does not interfere with the internal Redis-based rate-limiting logic.
- Apply this middleware only to `/api/rate-limited/resource` by adding it as the first argument after the path string: `app.get('/api/rate-limited/resource', rateLimiter, async (req, res) => { ... })`.
This adds the required rate limiting in front of the Redis calls, satisfies CodeQL’s rule, and leaves the rest of the logic (including the demonstration Redis-based rate limiting and telemetry) unchanged.
| @@ -9,10 +9,18 @@ | ||
| const { MongoClient } = require('mongodb'); | ||
| const redis = require('redis'); | ||
| const axios = require('axios'); | ||
| const RateLimit = require('express-rate-limit'); | ||
|
|
||
| const app = express(); | ||
| const port = process.env.PORT || 3000; | ||
|
|
||
| const rateLimiter = RateLimit({ | ||
| windowMs: 60 * 1000, // 1 minute | ||
| max: 60, // limit each IP to 60 requests per windowMs | ||
| standardHeaders: true, | ||
| legacyHeaders: false, | ||
| }); | ||
|
|
||
| app.use(express.json()); | ||
|
|
||
| // Helper function to get current trace ID | ||
| @@ -447,7 +451,7 @@ | ||
| }); | ||
|
|
||
| // 9. Redis - Rate limiting | ||
| app.get('/api/rate-limited/resource', async (req, res) => { | ||
| app.get('/api/rate-limited/resource', rateLimiter, async (req, res) => { | ||
| const clientIp = req.ip; | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('rate_limit.check'); |
| @@ -19,7 +19,8 @@ | ||
| "@opentelemetry/resources": "1.3.1", | ||
| "@opentelemetry/semantic-conventions": "1.3.1", | ||
| "@opentelemetry/auto-instrumentations-node": "0.31.0", | ||
| "@opentelemetry/exporter-trace-otlp-http": "0.29.2" | ||
| "@opentelemetry/exporter-trace-otlp-http": "0.29.2", | ||
| "express-rate-limit": "^8.2.1" | ||
| }, | ||
| "optionalDependencies": { | ||
| "pg": "^8.7.1", |
| Package | Version | Security advisories |
| --- | --- | --- |
| express-rate-limit (npm) | 8.2.1 | None |
| try { | ||
| // Make multiple parallel API calls | ||
| const [userProfile, userPosts, userTodos] = await Promise.all([ | ||
| axios.get(`https://jsonplaceholder.typicode.com/users/${userId}`), |
Check failure: Code scanning / CodeQL
Server-side request forgery (Critical): the request URL depends on a user-provided value
Copilot Autofix
AI 26 days ago
In general, to fix this class of issue you should never pass raw user input directly into the URL of an outgoing HTTP request. Instead, validate and normalize the input, and/or map it to a controlled value (for example, numeric IDs or values from an allow-list) before interpolating it into the path or query string.
For this endpoint, the simplest, non‑breaking fix is to constrain userId to a safe format before using it in the axios calls. The JSONPlaceholder API expects numeric IDs for users, posts, and todos, so we can parse the userId path parameter into an integer, ensure it is a positive finite number, and reject the request with a 400 error if validation fails. We then use this validated numericUserId when constructing the URLs. This preserves existing functionality for legitimate callers while preventing arbitrary strings from reaching the request URL.
Concretely, in nodejs/node10-express/db-examples.js within the /api/external/user-dashboard/:userId handler:
- Introduce validation logic right after extracting `userId` from `req.params`. Convert it to an integer with `parseInt`, and check that it is a positive integer (`Number.isInteger`, `> 0`, `Number.isFinite`).
- If validation fails, respond with HTTP 400 and do not make any external calls.
- Replace all uses of `userId` in the axios URLs with the validated `numericUserId`.
- Optionally, set the span attribute `user.id` using the validated value instead of the raw parameter.
No new helper methods or imports are required; we can use plain JavaScript number parsing and checks.
| @@ -518,16 +518,25 @@ | ||
| // 10. Axios - Multiple parallel API calls | ||
| app.get('/api/external/user-dashboard/:userId', async (req, res) => { | ||
| const { userId } = req.params; | ||
| const numericUserId = Number.parseInt(userId, 10); | ||
|
|
||
| if (!Number.isFinite(numericUserId) || !Number.isInteger(numericUserId) || numericUserId <= 0) { | ||
| return res.status(400).json({ | ||
| error: 'Invalid userId; expected a positive integer.', | ||
| traceId: getTraceId(), | ||
| }); | ||
| } | ||
|
|
||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('external.fetch_user_dashboard'); | ||
| span.setAttribute('user.id', userId); | ||
| span.setAttribute('user.id', numericUserId); | ||
|
|
||
| try { | ||
| // Make multiple parallel API calls | ||
| const [userProfile, userPosts, userTodos] = await Promise.all([ | ||
| axios.get(`https://jsonplaceholder.typicode.com/users/${userId}`), | ||
| axios.get(`https://jsonplaceholder.typicode.com/posts?userId=${userId}`), | ||
| axios.get(`https://jsonplaceholder.typicode.com/todos?userId=${userId}`), | ||
| axios.get(`https://jsonplaceholder.typicode.com/users/${numericUserId}`), | ||
| axios.get(`https://jsonplaceholder.typicode.com/posts?userId=${numericUserId}`), | ||
| axios.get(`https://jsonplaceholder.typicode.com/todos?userId=${numericUserId}`), | ||
| ]); | ||
|
|
||
| span.setAttribute('api.calls_completed', 3); |
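Several routes in this file take a numeric path parameter, so the guard from the suggested fix could also be lifted into a tiny middleware. The helper name and the 400 payload below are illustrative assumptions; `getTraceId` is the helper already defined in `db-examples.js`.

```js
// Sketch: reusable validation for positive-integer path parameters.
function requirePositiveIntParam(name) {
  return (req, res, next) => {
    const value = Number.parseInt(req.params[name], 10);
    if (!Number.isInteger(value) || value <= 0) {
      return res.status(400).json({
        error: `Invalid ${name}; expected a positive integer.`,
        traceId: getTraceId(),
      });
    }
    req.params[name] = String(value); // hand the normalized value to the handler
    next();
  };
}

// app.get('/api/external/user-dashboard/:userId',
//   requirePositiveIntParam('userId'),
//   async (req, res) => { ... });
```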
| app.post('/api/orders/process', async (req, res) => { | ||
| const { userId, items } = req.body; | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('order.process_complete_workflow'); | ||
| span.setAttribute('user.id', userId); | ||
| span.setAttribute('items.count', items.length); | ||
|
|
||
| const pgClient = await pgPool.connect(); | ||
|
|
||
| try { | ||
| await pgClient.query('BEGIN'); | ||
|
|
||
| // 1. Check user exists (PostgreSQL) | ||
| const userResult = await pgClient.query('SELECT id, email FROM users WHERE id = $1', [userId]); | ||
| if (userResult.rows.length === 0) { | ||
| throw new Error('User not found'); | ||
| } | ||
| const user = userResult.rows[0]; | ||
|
|
||
| // 2. Check inventory (MySQL) | ||
| const [inventory] = await mysqlPool.execute( | ||
| 'SELECT id, stock FROM products WHERE id IN (?)', | ||
| [items.map(i => i.productId)] | ||
| ); | ||
|
|
||
| for (const item of items) { | ||
| const product = inventory.find(p => p.id === item.productId); | ||
| if (!product || product.stock < item.quantity) { | ||
| throw new Error(`Insufficient stock for product ${item.productId}`); | ||
| } | ||
| } | ||
|
|
||
| // 3. Calculate total and create order (PostgreSQL) | ||
| const total = items.reduce((sum, item) => sum + (item.price * item.quantity), 0); | ||
| const orderResult = await pgClient.query( | ||
| 'INSERT INTO orders(user_id, total, status, created_at) VALUES($1, $2, $3, NOW()) RETURNING id', | ||
| [userId, total, 'pending'] | ||
| ); | ||
| const orderId = orderResult.rows[0].id; | ||
|
|
||
| // 4. Update inventory (MySQL) | ||
| const mysqlConn = await mysqlPool.getConnection(); | ||
| await mysqlConn.beginTransaction(); | ||
|
|
||
| for (const item of items) { | ||
| await mysqlConn.execute( | ||
| 'UPDATE products SET stock = stock - ? WHERE id = ?', | ||
| [item.quantity, item.productId] | ||
| ); | ||
| } | ||
|
|
||
| await mysqlConn.commit(); | ||
| mysqlConn.release(); | ||
|
|
||
| // 5. Store order details in MongoDB | ||
| const db = await connectMongo(); | ||
| await db.collection('order_details').insertOne({ | ||
| orderId, | ||
| userId, | ||
| items, | ||
| total, | ||
| status: 'pending', | ||
| createdAt: new Date() | ||
| }); | ||
|
|
||
| // 6. Send notification via external API | ||
| try { | ||
| await axios.post('https://jsonplaceholder.typicode.com/posts', { | ||
| title: 'Order Confirmation', | ||
| body: `Order #${orderId} created for ${user.email}`, | ||
| userId: 1 | ||
| }); | ||
| } catch (notificationError) { | ||
| // Log but don't fail the order | ||
| console.error('Notification failed:', notificationError.message); | ||
| } | ||
|
|
||
| // 7. Clear user's cart from Redis | ||
| await redisClient.del(`cart:${userId}`); | ||
|
|
||
| await pgClient.query('COMMIT'); | ||
|
|
||
| span.setAttribute('order.id', orderId); | ||
| span.setAttribute('order.total', total); | ||
| span.setAttribute('order.status', 'success'); | ||
| span.setStatus({ code: opentelemetry.SpanStatusCode.OK }); | ||
|
|
||
| res.status(201).json({ | ||
| success: true, | ||
| orderId, | ||
| total, | ||
| items: items.length, | ||
| traceId: getTraceId(), | ||
| }); | ||
| } catch (error) { | ||
| await pgClient.query('ROLLBACK'); | ||
|
|
||
| span.setAttribute('order.status', 'failed'); | ||
| span.recordException(error); | ||
| span.setStatus({ | ||
| code: opentelemetry.SpanStatusCode.ERROR, | ||
| message: error.message | ||
| }); | ||
|
|
||
| res.status(500).json({ | ||
| error: error.message, | ||
| traceId: getTraceId() | ||
| }); | ||
| } finally { | ||
| pgClient.release(); | ||
| span.end(); | ||
| } | ||
| }); |
Check failure: Code scanning / CodeQL
Missing rate limiting (High): this route handler performs multiple database accesses
Copilot Autofix
AI 26 days ago
In general, the problem is best fixed by adding a rate‑limiting middleware to the Express application, either globally or specifically on routes that perform heavy database or external-service operations. A standard library such as express-rate-limit can be used to cap the number of requests per IP within a time window, limiting the potential for denial‑of‑service through excessive calls to the order-processing workflow.
For this codebase, the least invasive fix that preserves existing behavior is to: (1) import express-rate-limit near the top of nodejs/node10-express/db-examples.js, (2) define a rate limiter configuration dedicated to order processing (e.g., a modest per‑IP cap per minute), and (3) apply this limiter as middleware on the /api/orders/process route only. This avoids changing behavior of unrelated routes such as /health while protecting the expensive multi‑DB workflow. Concretely, add const rateLimit = require('express-rate-limit'); close to the other require calls, define const orderLimiter = rateLimit({...}) somewhere after the app is created, and change the route definition from app.post('/api/orders/process', async (req, res) => { ... }) to app.post('/api/orders/process', orderLimiter, async (req, res) => { ... }). No other logic in the handler needs to change.
| @@ -9,10 +9,19 @@ | ||
| const { MongoClient } = require('mongodb'); | ||
| const redis = require('redis'); | ||
| const axios = require('axios'); | ||
| const rateLimit = require('express-rate-limit'); | ||
|
|
||
| const app = express(); | ||
| const port = process.env.PORT || 3000; | ||
|
|
||
| // Rate limiter for expensive order processing endpoint | ||
| const orderLimiter = rateLimit({ | ||
| windowMs: 1 * 60 * 1000, // 1 minute | ||
| max: 30, // limit each IP to 30 order processing requests per windowMs | ||
| standardHeaders: true, // Return rate limit info in the `RateLimit-*` headers | ||
| legacyHeaders: false, // Disable the `X-RateLimit-*` headers | ||
| }); | ||
|
|
||
| app.use(express.json()); | ||
|
|
||
| // Helper function to get current trace ID | ||
| @@ -636,7 +641,7 @@ | ||
| // ============================================ | ||
|
|
||
| // 12. Complex workflow - Order processing with multiple DB operations | ||
| app.post('/api/orders/process', async (req, res) => { | ||
| app.post('/api/orders/process', orderLimiter, async (req, res) => { | ||
| const { userId, items } = req.body; | ||
| const tracer = opentelemetry.trace.getTracer('node10-express-example'); | ||
| const span = tracer.startSpan('order.process_complete_workflow'); |
| @@ -19,7 +19,8 @@ | ||
| "@opentelemetry/resources": "1.3.1", | ||
| "@opentelemetry/semantic-conventions": "1.3.1", | ||
| "@opentelemetry/auto-instrumentations-node": "0.31.0", | ||
| "@opentelemetry/exporter-trace-otlp-http": "0.29.2" | ||
| "@opentelemetry/exporter-trace-otlp-http": "0.29.2", | ||
| "express-rate-limit": "^8.2.1" | ||
| }, | ||
| "optionalDependencies": { | ||
| "pg": "^8.7.1", |
| Package | Version | Security advisories |
| --- | --- | --- |
| express-rate-limit (npm) | 8.2.1 | None |
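Because the suggested `orderLimiter` enables `standardHeaders`, a well-behaved caller can read the advertised budget and back off instead of retrying immediately. A client-side sketch, assuming the example runs on localhost:3000 and a runtime with global `fetch`; the retry policy (log and give up) is an illustrative assumption.

```js
// Client-side sketch: submit an order and honour the RateLimit-* headers that
// express-rate-limit emits when standardHeaders is enabled.
async function submitOrder(order) {
  const res = await fetch('http://localhost:3000/api/orders/process', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(order),
  });

  if (res.status === 429) {
    // RateLimit-Reset reports the seconds remaining in the current window.
    const resetSeconds = Number(res.headers.get('RateLimit-Reset')) || 60;
    console.warn(`Order endpoint throttled; retry in ~${resetSeconds}s`);
    return null;
  }
  return res.json();
}
```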
No description provided.