Replies: 3 comments 1 reply
-
You can see what I have done in this post.
-
I found a solution that works for me. I'm not sure if it's the best one, but it does the job: I write the content for the webpage into a file first, and once the file is complete, I send it to the client. With this approach, the full content is no longer held in RAM, which helps avoid memory allocation issues when dealing with large pages.

File file = LittleFS.open("/myhtml.html", "r");
if (file) {
    // Capture the File handle by value; "mutable" lets the callback read from it.
    AsyncWebServerResponse* response = _request->beginChunkedResponse(
        "text/html",
        [file](uint8_t* buffer, size_t maxLen, size_t index) mutable -> size_t {
            size_t bytesRead = file.read(buffer, maxLen); // fill at most maxLen bytes
            if (bytesRead == 0) {
                // Returning 0 ends the chunked response, so clean up the temp file here.
                file.close();
                LittleFS.remove("/myhtml.html");
            }
            return bytesRead;
        }
    );
    // _request and _headers are members of my handler class.
    for (const auto& header : _headers) {
        response->addHeader(header.first, header.second);
    }
    _request->send(response);
    _headers.clear();
}
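The first half of the approach, writing the generated page into the file, is not shown above. A minimal sketch of how it could look, assuming the fragments are produced piece by piece (the content strings are placeholders):

File out = LittleFS.open("/myhtml.html", "w");
if (out) {
    out.print(F("<!DOCTYPE html><html><body>"));
    // ... print each dynamically generated fragment as soon as it is ready,
    // instead of appending it to a String in RAM ...
    out.print(F("<p>generated content goes here</p>"));
    out.print(F("</body></html>"));
    out.close(); // the file is now complete and can be streamed as shown above
}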
-
I have started the "beginChunkedResponse missing request object" discussion. It has the code for download, and this works for one file at a time. The upload code:
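As a rough sketch (not the code from that discussion), an upload handler with ESPAsyncWebServer that writes incoming chunks to LittleFS, one file at a time, could look like this; handleUpload, registerUpload and the /upload route are assumed names:

#include <ESPAsyncWebServer.h>
#include <LittleFS.h>

static File uploadFile; // single handle, so only one upload at a time

void handleUpload(AsyncWebServerRequest* request, String filename,
                  size_t index, uint8_t* data, size_t len, bool final) {
    if (index == 0) {
        // First chunk of this upload: create (or overwrite) the target file.
        String path = "/" + filename;
        uploadFile = LittleFS.open(path, "w");
    }
    if (uploadFile) {
        uploadFile.write(data, len); // append the received chunk
    }
    if (final) {
        uploadFile.close();
    }
}

void registerUpload(AsyncWebServer& server) {
    server.on("/upload", HTTP_POST,
              [](AsyncWebServerRequest* request) {
                  // Called after the upload handler has seen the final chunk.
                  request->send(200, "text/plain", "upload finished");
              },
              handleUpload);
}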
-
Hi everyone,
I'm dynamically generating a large HTML page with a size of around 65 kB. My current approach looks like this:
However, when I increase the buffer to 70 kB, I encounter the following errors:
These errors repeat multiple times. Interestingly, the same page works (albeit slowly) when I use chunked responses via the web server.
I’m aware of the chunked response method:
Maybe I'm overcomplicating things, but I'm looking for a way to stream each _responseStream->print(_content); call directly to the client in chunks, without needing to preassemble the full HTML string.
Any ideas or suggestions?
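One option along those lines is to generate the HTML piece by piece inside the chunked-response callback itself. A minimal sketch, assuming a hypothetical buildChunk(part) helper that returns the next HTML fragment (an empty String when done) and that request is the AsyncWebServerRequest pointer inside the handler; the static state keeps the example short but limits it to one request at a time:

AsyncWebServerResponse* response = request->beginChunkedResponse(
    "text/html",
    [](uint8_t* buffer, size_t maxLen, size_t index) -> size_t {
        static String pending;  // part of the current fragment not yet sent
        static size_t part = 0; // which fragment to generate next

        if (pending.length() == 0) {
            pending = buildChunk(part++); // hypothetical generator
            if (pending.length() == 0) {
                part = 0;  // reset for the next request
                return 0;  // 0 tells the server the body is complete
            }
        }
        size_t n = pending.length() < maxLen ? pending.length() : maxLen;
        memcpy(buffer, pending.c_str(), n);
        pending.remove(0, n); // keep whatever did not fit in this chunk
        return n;
    });
request->send(response);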