
Commit e089b5c

cli: improve tsdump upload time
Previously, the tsdump upload subcommand used 10 workers to upload time series data to Datadog, and each upload request was retried up to 100 times with a maximum backoff of 2s. This resulted in long upload times. This patch raises the worker count to 20 and lowers the maximum backoff to 100ms.

Epic: None
Fixes: #146089

Release note: None
Parent: 9f30bc5


pkg/cli/tsdump_upload.go

Lines changed: 4 additions & 4 deletions
@@ -387,7 +387,7 @@ func (d *datadogWriter) flush(data []DatadogSeries) error {
 	}
 
 	retryOpts := base.DefaultRetryOptions()
-	retryOpts.MaxBackoff = 2 * time.Second
+	retryOpts.MaxBackoff = 100 * time.Millisecond
 	retryOpts.MaxRetries = 100
 	var req *http.Request
 	for retry := retry.Start(retryOpts); retry.Next(); {
@@ -460,9 +460,9 @@ func (d *datadogWriter) upload(fileName string) error {
 
 	// Note(davidh): This was previously set at 1000 and we'd get regular
 	// 400s from Datadog with the cryptic `Unable to decompress payload`
-	// error. I reduced this to 10 and was able to upload a 1.65GB tsdump
-	// in 3m10s without any errors (compared to 1m43s with 700 errors).
-	for i := 0; i < 10; i++ {
+	// error. We reduced this to 20 and were able to upload a 3.2GB tsdump
+	// in 6m20s without any errors.
+	for i := 0; i < 20; i++ {
 		go func() {
 			for data := range ch {
 				emittedMetrics, err := d.emitDataDogMetrics(data)
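
For context, the shape of the code after this patch is a pool of 20 goroutines draining a channel of batches, with each flush retried under a capped exponential backoff (MaxRetries = 100, MaxBackoff = 100ms). Below is a minimal, self-contained sketch of that pattern using only the Go standard library; `uploadBatch`, the 10ms initial backoff, and the simulated failures are hypothetical stand-ins for the real Datadog client and the `retry.Start(retryOpts)` loop in tsdump_upload.go, not the actual CockroachDB code.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// uploadBatch stands in for the real Datadog flush call; it fails
// randomly so the retry path is exercised. Hypothetical stub.
func uploadBatch(batch int) error {
	if rand.Intn(4) == 0 {
		return errors.New("transient upload error")
	}
	return nil
}

// flushWithRetry mirrors the patched retry shape: up to 100 retries
// with exponential backoff capped at 100ms.
func flushWithRetry(batch int) error {
	const maxRetries = 100
	backoff := 10 * time.Millisecond // hypothetical initial backoff
	maxBackoff := 100 * time.Millisecond
	var err error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		if err = uploadBatch(batch); err == nil {
			return nil
		}
		time.Sleep(backoff)
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
	return err
}

func main() {
	ch := make(chan int)
	var wg sync.WaitGroup
	// 20 workers, matching the new worker count in the patch.
	for i := 0; i < 20; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for batch := range ch {
				if err := flushWithRetry(batch); err != nil {
					fmt.Println("batch failed:", err)
				}
			}
		}()
	}
	for b := 0; b < 200; b++ {
		ch <- b
	}
	close(ch)
	wg.Wait()
}
```

The design trade-off is visible here: lowering the backoff cap means a failed flush is retried sooner, trading a little extra retry traffic for much less idle waiting per worker, which together with doubling the worker count is where the commit's upload-time improvement comes from.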
