Description
Summary
The PHP Protobuf runtime (php/src/Google/Protobuf/Internal/*) accepts a crafted length-delimited field whose declared length varint becomes a negative integer after sign extension.
That negative length is passed directly into readRaw() / advance() without validation. This corrupts the internal cursor, so mergeFromString() never exits cleanly.
Observed outcomes:
- High-CPU infinite loop that never returns.
- Invalid offset access that triggers runtime warnings/errors.
A single malicious Protobuf message can cause this. Sending a few such requests in parallel can exhaust all PHP workers and cause a denial of service.
Environment
- OS: Ubuntu 24.04.3 LTS
- PHP: PHP 8.3.6 (NTS), Zend Engine 4.3.6, OPcache 8.3.6
- Repo under test: protocolbuffers/protobuf (latest git clone --depth=1)
Behavior / Security Impact
CLI mode
Calling mergeFromString() directly on the crafted payload in a standalone PHP script:
- Sometimes it immediately crashes with an "invalid string offset" style runtime error.
- Sometimes it locks into a tight infinite loop and never returns.
So parsing does NOT "throw a clean exception and exit safely." It either crashes or hangs.
Built-in PHP server mode
Run a minimal HTTP server using:
php -S 127.0.0.1:3000 rest_api_server.php
The server exposes POST /v1/events and just calls mergeFromString() on the request body.
Observed:
- The request handler (worker) goes into mergeFromString() and spins.
- PHP only kills that request after ~30 seconds via the per-request max execution time.
- During those ~30 seconds, that worker is fully consumed and cannot serve anything else.
- A few parallel malicious requests keep multiple workers busy for ~30 seconds each → remote unauthenticated DoS.
So:
- CLI: crash OR permanent loop.
- Server: worker is held ~30 seconds per request, then killed by timeout.
There is never a fast "throw → send 400 → free worker immediately."
Root Cause
- Message::mergeFromString()
  - Reads a length-delimited field size using CodedInputStream::readVarint32().
  - Trusts that value as the field length with no validation (does not reject negative or absurdly large values).
  - Passes that length directly to readRaw().
- CodedInputStream::readVarint32()
  - Reassembles the varint into a 32-bit integer.
  - A crafted varint such as 0xFA 0xFF 0xFF 0xFF 0x0F decodes to the unsigned 32-bit value 0xFFFFFFFA, which becomes the NEGATIVE PHP int -6 after 32-bit sign extension.
  - So the "length" can become negative.
- CodedInputStream::readRaw($size) / advance($size)
  - Accepts $size as-is, even if it is negative.
  - Tries to advance the internal cursor by that negative amount.
  - This corrupts the cursor (rewinds it or points it somewhere invalid).
- After that, readTag() either:
  - keeps re-reading the same bytes forever (tight loop, high CPU), or
  - touches invalid string offsets and triggers runtime errors.
Result: mergeFromString() does not just "fail parse." It either hangs until something external kills it or crashes outright.
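The chain above can be modeled in a few lines of plain PHP. This is a simplified sketch assuming 64-bit PHP, not the actual CodedInputStream code; it only illustrates how the crafted varint turns into a negative length and how an unchecked advance then rewinds a cursor:

```php
<?php
// Sketch: decode a varint32, truncate to 32 bits, and sign-extend —
// a simplified model of the runtime behavior, not the real implementation.
function decodeVarint32(string $bytes): int
{
    $result = 0;
    $shift  = 0;
    for ($i = 0; $i < strlen($bytes); $i++) {
        $b = ord($bytes[$i]);
        $result |= ($b & 0x7F) << $shift;   // accumulate 7 payload bits
        $shift  += 7;
        if (($b & 0x80) === 0) {            // continuation bit clear: done
            break;
        }
    }
    $result &= 0xFFFFFFFF;                  // truncate to 32 bits
    if ($result >= 0x80000000) {            // top bit set => sign-extend
        $result -= 0x100000000;
    }
    return $result;
}

$len = decodeVarint32("\xFA\xFF\xFF\xFF\x0F"); // 0xFFFFFFFA => -6

// An advance() that trusts $size moves the cursor BACKWARDS on negative
// input, so the parser can land on already-consumed bytes and loop.
$cursor = 10;
$cursor += $len; // rewinds from 10 to 4 instead of failing
```

Running the same decode with a sane length (e.g. `"\x05"`) returns 5, which is why well-formed messages never hit this path.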
Reproduction Steps (PoC)
Three PoC files are attached to this issue:
- poc_setup.sh
- rest_api_server.php
- rest_api_attack.php
- Run poc_setup.sh on Ubuntu 24.04.3 LTS with PHP 8.3.6, git, and composer.
poc_setup.sh does the following:
- clones protocolbuffers/protobuf with --depth=1
- runs composer install in protobuf/php
- creates protobuf/poc/
- writes two PoC scripts there:
- rest_api_server.php (minimal HTTP server)
- rest_api_attack.php (malicious client)
- In terminal A:
cd protobuf/poc
php -S 127.0.0.1:3000 rest_api_server.php
rest_api_server.php exposes POST /v1/events and does roughly:
$msg = new Google\Protobuf\Timestamp();
$msg->mergeFromString($rawBody);
There is intentionally no extra validation.
- In terminal B:
cd protobuf/poc
php rest_api_attack.php http://127.0.0.1:3000/v1/events
rest_api_attack.php sends a Protobuf body where a length-delimited field’s length varint is \xFA\xFF\xFF\xFF\x0F.
That varint decodes to a negative length inside readVarint32().
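A minimal sketch of how such a body can be constructed. The field number and tag byte here are illustrative assumptions (the authoritative bytes are in the attached rest_api_attack.php); any tag declaring wire type 2 (length-delimited) followed by the malicious length varint triggers the same path:

```php
<?php
// Sketch of a malicious Protobuf body (assumed shape). A tag declaring a
// length-delimited field is followed by a 5-byte length varint whose low
// 32 bits have the sign bit set.
$fieldNumber = 3;                      // illustrative: any length-delimited tag works
$tag       = chr(($fieldNumber << 3) | 2); // wire type 2 = length-delimited
$badLength = "\xFA\xFF\xFF\xFF\x0F";       // decodes to 0xFFFFFFFA (negative as int32)
$payload   = $tag . $badLength;            // 6 bytes total
```

The 6-byte string in `$payload` is the entire request body; no valid field data needs to follow the corrupt length.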
Observed result:
- The server worker never produces a normal response.
- It burns CPU until PHP’s ~30s max execution time kills that request, logging something like:
  Maximum execution time of 30 seconds exceeded in .../CodedInputStream.php ...
- During those ~30 seconds, that worker is completely tied up.
- Sending several malicious requests in parallel ties up all workers → service-wide DoS.
- Running the same payload directly in CLI (no server, just mergeFromString() in a script) produces either:
  - an immediate crash ("invalid string offset"), OR
  - a permanent tight loop.
Expected Behavior
- If the decoded field length is negative or larger than the remaining buffer, the runtime should immediately throw.
- readRaw() / advance($size) should refuse negative sizes.
- The caller should receive a normal exception, so it can return 400 Bad Request (or similar) and immediately free the worker.
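The desired caller experience can be sketched as follows. `parseBody()` here is a hypothetical stand-in for `$msg->mergeFromString($rawBody)` after the fix (it simulates an exception on malformed input); the point is that the worker returns 400 immediately instead of stalling:

```php
<?php
// Sketch of fast-fail request handling, assuming the runtime throws on
// malformed input. parseBody() is a stand-in, NOT the protobuf API.
function parseBody(string $body): void
{
    // Stand-in check: reject a body carrying the malicious length varint,
    // the way a hardened mergeFromString() would throw during parsing.
    if (str_contains($body, "\xFA\xFF\xFF\xFF\x0F")) {
        throw new InvalidArgumentException('negative declared field length');
    }
}

function handle(string $body): int
{
    try {
        parseBody($body);
        return 200; // parsed fine
    } catch (InvalidArgumentException $e) {
        return 400; // Bad Request: worker freed immediately, no 30s stall
    }
}
```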
Right now:
- 1 malicious request = either instant crash, or ~30 seconds of 100% CPU in one worker.
- A handful of concurrent requests can starve the entire PHP worker pool with no authentication.
Fix Request / Impact
Please harden the PHP runtime (CodedInputStream) so that:
- negative / absurd declared lengths are rejected before calling readRaw(),
- advance() refuses negative sizes,
- malformed inputs trigger a fast exception instead of a tight loop or a 30-second stall.
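The requested guard could look roughly like the following. This is an illustrative sketch (the class name and shape are not the actual protobuf runtime code): validate the declared size against both zero and the remaining buffer before moving the cursor.

```php
<?php
// Sketch of a bounds-checked cursor, illustrating the guard requested
// above. Illustrative only — not the actual CodedInputStream code.
final class BoundedReader
{
    private int $cursor = 0;

    public function __construct(private string $buffer)
    {
    }

    public function advance(int $size): void
    {
        if ($size < 0) {
            // Reject negative sizes outright instead of rewinding the cursor.
            throw new UnexpectedValueException("negative size: $size");
        }
        if ($this->cursor + $size > strlen($this->buffer)) {
            // Reject sizes larger than the bytes actually remaining.
            throw new UnexpectedValueException('size exceeds remaining buffer');
        }
        $this->cursor += $size;
    }

    public function cursor(): int
    {
        return $this->cursor;
    }
}
```

With both checks in place, the crafted -6 length (or any absurdly large one) produces an immediate exception, which the caller can translate into a 400 response.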