
Referencing previous_request_ids can crash (500) the TTS endpoint #270

@nmontavon

Description


Why
We split our text generations into sentences, then use request ids to "stitch" them back together. This gives us the ability to offer the user granular control over regenerating individual sentences.

Scenario
We generate a text with 3 sentences, splitting it into 3 "blocks": 1), 2) and 3).

At first, we fully generate the entire text, which means:

  1) is generated using no previous or next ids, only next_text as context
  2) is generated using 1.0) as its previous request id, plus next_text
  3) is generated using two previous request ids ( 1.0) and 2.0) ) and no next_text

This works just fine. Let's keep track of when each block was generated and call the results 1.0), 2.0) and 3.0).
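In request-body terms, the initial full pass looks roughly like this (a sketch: `initialBodies` and the id parameters are hypothetical names, but `text`, `model_id`, `previous_request_ids` and `next_text` are the documented request fields, and the request ids would come from the responses of the earlier calls):

```typescript
// Sketch of the three request bodies for the initial full generation.
// requestId1/requestId2 stand for the request ids returned by the first
// and second TTS calls (hypothetical placeholder names).
type TtsBody = {
  text: string;
  model_id: string;
  previous_request_ids?: string[];
  next_text?: string;
};

function initialBodies(
  sentences: [string, string, string],
  requestId1: string,
  requestId2: string
): TtsBody[] {
  const model_id = "eleven_multilingual_v2";
  return [
    // Block 1: no previous ids, only next_text as context.
    { text: sentences[0], model_id, next_text: sentences[1] },
    // Block 2: conditioned on block 1's request id, plus next_text.
    {
      text: sentences[1],
      model_id,
      previous_request_ids: [requestId1],
      next_text: sentences[2],
    },
    // Block 3: conditioned on both earlier request ids, no next_text.
    { text: sentences[2], model_id, previous_request_ids: [requestId1, requestId2] },
  ];
}
```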

Now we decide to "regenerate" block 2). The following happens:

  2) is generated again using 1.0) as its previous request id and 3.0) as its next request id

This also works fine. We then have the results 1.0), 2.1) and 3.0).
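As a request body, the regeneration of block 2 looks roughly like this (again a sketch: `regenerateMiddle` and the id values are placeholders, while `next_request_ids` is the documented counterpart field to `previous_request_ids`):

```typescript
type RegenBody = {
  text: string;
  model_id: string;
  previous_request_ids?: string[];
  next_request_ids?: string[];
};

// Block 2 regenerated between the original block 1 and block 3 results.
function regenerateMiddle(text: string, prevId: string, nextId: string): RegenBody {
  return {
    text,
    model_id: "eleven_multilingual_v2",
    previous_request_ids: [prevId], // request id of result 1.0)
    next_request_ids: [nextId],     // request id of result 3.0)
  };
}
```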

The problem starts here:

If I now decide to "regenerate" 3), the following happens:

  3) gets generated again using the previous request ids 1.0) and 2.1) (UPDATE: or just 2.1) alone) --> THIS crashes the endpoint and results in a 500:
Error in TTS SDK call: ElevenLabsError: Status code: 500
Body: "{\"status\": \"internal_server_error\", \"message\": \"Internal Server error. All such crashes are reported to us automatically.\"}"
    at TextToSpeech.<anonymous> (/Users/user/VSCode/project/front/node_modules/@elevenlabs/elevenlabs-js/api/resources/textToSpeech/client/Client.js:369:31)
    at Generator.next (<anonymous>)
    at fulfilled (/Users/user/VSCode/project/front/node_modules/@elevenlabs/elevenlabs-js/api/resources/textToSpeech/client/Client.js:41:58)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5) {
  statusCode: 500,
  body: '{"status": "internal_server_error", "message": "Internal Server error. All such crashes are reported to us automatically."}',
  rawResponse: {
    headers: Headers {
      date: 'Thu, 18 Sep 2025 08:16:32 GMT',
      server: 'uvicorn',
      'content-length': '123',
      'content-type': 'text/plain; charset=utf-8',
      'access-control-allow-origin': '*',
      'access-control-allow-headers': '*',
      'access-control-allow-methods': 'POST, PATCH, OPTIONS, DELETE, GET, PUT',
      'access-control-max-age': '600',
      'strict-transport-security': 'max-age=31536000; includeSubDomains',
      'x-trace-id': '57e5f3b07eaa40098638f1182bde9651',
      'x-region': 'us-central1',
      via: '1.1 google, 1.1 google',
      'alt-svc': 'h3=":443"; ma=2592000,h3-29=":443"; ma=2592000'
    },
    redirected: false,
    status: 500,
    statusText: 'Internal Server Error',
    type: 'basic',
    url: 'https://api.elevenlabs.io/v1/text-to-speech/voice_id/stream/with-timestamps?enable_logging=true&output_format=pcm_48000'
  }
}
Error details: {
  name: 'Error',
  message: 'Status code: 500\n' +
    'Body: "{\\"status\\": \\"internal_server_error\\", \\"message\\": \\"Internal Server error. All such crashes are reported to us automatically.\\"}"',
  stack: 'Error: Status code: 500\n' +
    'Body: "{\\"status\\": \\"internal_server_error\\", \\"message\\": \\"Internal Server error. All such crashes are reported to us automatically.\\"}"\n' +
    '    at TextToSpeech.<anonymous> (/Users/user/VSCode/project/front/node_modules/@elevenlabs/elevenlabs-js/api/resources/textToSpeech/client/Client.js:369:31)\n' +
    '    at Generator.next (<anonymous>)\n' +
    '    at fulfilled (/Users/user/VSCode/project/front/node_modules/@elevenlabs/elevenlabs-js/api/resources/textToSpeech/client/Client.js:41:58)\n' +
    '    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)'
}
Error generating audio stream: Error: TTS processing failed: Status code: 500
Body: "{\"status\": \"internal_server_error\", \"message\": \"Internal Server error. All such crashes are reported to us automatically.\"}"
    at POST (/Users/user/VSCode/project/front/src/routes/api/elevenlabs-stream/+server.ts:165:15)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async render_endpoint (/Users/user/VSCode/project/front/node_modules/@sveltejs/kit/src/runtime/server/endpoint.js:56:20)
    at async resolve (/Users/user/VSCode/project/front/node_modules/@sveltejs/kit/src/runtime/server/respond.js:460:23)
    at async eval (/Users/user/VSCode/project/front/src/hooks.server.ts:175:22)
    at async fn (file:///Users/user/VSCode/project/front/node_modules/@sveltejs/kit/src/exports/hooks/sequence.js:102:13)
    at async fn (/Users/user/VSCode/project/front/node_modules/@sveltejs/kit/src/runtime/server/respond.js:325:16)
    at async internal_respond (/Users/user/VSCode/project/front/node_modules/@sveltejs/kit/src/runtime/server/respond.js:307:22)
    at async file:///Users/user/VSCode/project/front/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:562:22
Error details: {
  name: 'Error',
  message: 'TTS processing failed: Status code: 500\n' +
    'Body: "{\\"status\\": \\"internal_server_error\\", \\"message\\": \\"Internal Server error. All such crashes are reported to us automatically.\\"}"',
  stack: 'Error: TTS processing failed: Status code: 500\n' +
    'Body: "{\\"status\\": \\"internal_server_error\\", \\"message\\": \\"Internal Server error. All such crashes are reported to us automatically.\\"}"\n' +
    '    at POST (/Users/user/VSCode/project/front/src/routes/api/elevenlabs-stream/+server.ts:165:15)\n' +
    '    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)\n' +
    '    at async render_endpoint (/Users/user/VSCode/project/front/node_modules/@sveltejs/kit/src/runtime/server/endpoint.js:56:20)\n' +
    '    at async resolve (/Users/user/VSCode/project/front/node_modules/@sveltejs/kit/src/runtime/server/respond.js:460:23)\n' +
    '    at async eval (/Users/user/VSCode/project/front/src/hooks.server.ts:175:22)\n' +
    '    at async fn (file:///Users/user/VSCode/project/front/node_modules/@sveltejs/kit/src/exports/hooks/sequence.js:102:13)\n' +
    '    at async fn (/Users/user/VSCode/project/front/node_modules/@sveltejs/kit/src/runtime/server/respond.js:325:16)\n' +
    '    at async internal_respond (/Users/user/VSCode/project/front/node_modules/@sveltejs/kit/src/runtime/server/respond.js:307:22)\n' +
    '    at async file:///Users/user/VSCode/project/front/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:562:22'
}

Note: Some info has been anonymised in this log.

This can be reproduced both with the elevenlabs-js npm module and by calling the plain HTTP API directly.
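A minimal repro against the plain HTTP endpoint might look like this (a sketch: the API key, voice id and request ids are placeholders; the URL matches the one in the log above, and `xi-api-key` is the documented auth header). The body mixes request ids from the two generation passes, which appears to be what triggers the 500:

```typescript
// Hypothetical minimal repro: regenerate block 3 after block 2 was already
// regenerated, so previous_request_ids mixes ids from two different passes
// (id1_0 from the first pass, id2_1 from the regeneration pass).
async function reproduce(
  apiKey: string,
  voiceId: string,
  id1_0: string,
  id2_1: string
): Promise<number> {
  const res = await fetch(
    `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}/stream/with-timestamps?output_format=pcm_48000`,
    {
      method: "POST",
      headers: { "xi-api-key": apiKey, "Content-Type": "application/json" },
      body: JSON.stringify({
        text: "Third sentence.",
        model_id: "eleven_multilingual_v2",
        previous_request_ids: [id1_0, id2_1],
      }),
    }
  );
  return res.status; // observed: 500 instead of 200
}
```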

The problem seems to be mixing request ids from the two generation passes, but I do not see why that would not be allowed. Please explain how you would expect this to work. Thank you.

Info
Probably not relevant, since it happens with both the HTTP endpoint and the JS library, but here is the setup that was used:

Endpoint: https://api.elevenlabs.io/v1/text-to-speech/:voice_id/stream
Model: eleven_multilingual_v2

JS library version: latest, 2.15.0
Runtime: both node and bun produce the same result
Framework: SvelteKit 2.42.1, Svelte 5.39.2, TypeScript 5.9.0
