fix(go): resolve intermittent EOF errors fetching large results (#192) #194
Merged
Conversation
Fix intermittent "arrow/ipc: could not read continuation indicator: EOF" errors when fetching large result sets (30M+ rows) from Databricks.

Root cause: The code called SchemaBytes() before any data fetch. In databricks-sql-go, when query results are large (no "direct results" in the response), the schema is populated lazily during the first Next() call. Calling SchemaBytes() before Next() returned empty bytes, causing the Arrow IPC reader to fail with EOF.

The fix changes the initialization order to:

1. First call loadNextReader(), which triggers the data fetch
2. Get the schema from the loaded IPC reader (which always has a schema)
3. Fall back to SchemaBytes() only for empty result sets

Fixes: #192

Co-Authored-By: Claude (databricks-claude-opus-4-5) <noreply@anthropic.com>
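Conceptually, the reordering looks like the sketch below. The `resultSet` interface, `NextStream()`, and `initSchema()` are illustrative stand-ins for the driver's internals, not databricks-sql-go's actual API; only the ordering of the steps reflects the fix.

```go
package example

import (
	"bytes"

	"github.com/apache/arrow-go/v18/arrow"
	"github.com/apache/arrow-go/v18/arrow/ipc"
)

// resultSet is an illustrative stand-in for the driver's view of
// databricks-sql-go results; the method names mirror the description
// above but are not the library's real interface.
type resultSet interface {
	// SchemaBytes returns the serialized Arrow schema. For large
	// result sets this is empty until the first fetch has happened.
	SchemaBytes() ([]byte, error)
	// NextStream fetches the next Arrow IPC stream, or returns nil
	// when the result set is empty. Fetching populates the schema.
	NextStream() ([]byte, error)
}

// initSchema shows the corrected order: fetch first, take the schema
// from the loaded IPC reader, and fall back to SchemaBytes() only for
// empty result sets.
func initSchema(rs resultSet) (*arrow.Schema, error) {
	// 1. Trigger the first data fetch (the loadNextReader() step).
	stream, err := rs.NextStream()
	if err != nil {
		return nil, err
	}

	if stream != nil {
		// 2. A loaded IPC stream always starts with its schema
		//    message, so the reader cannot hit the empty-bytes EOF.
		rdr, err := ipc.NewReader(bytes.NewReader(stream))
		if err != nil {
			return nil, err
		}
		defer rdr.Release()
		return rdr.Schema(), nil
	}

	// 3. Empty result set: no stream was fetched, so SchemaBytes()
	//    is the only source of the schema (and is populated here).
	sb, err := rs.SchemaBytes()
	if err != nil {
		return nil, err
	}
	rdr, err := ipc.NewReader(bytes.NewReader(sb))
	if err != nil {
		return nil, err
	}
	defer rdr.Release()
	return rdr.Schema(), nil
}
```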
When custom TLS configuration is needed (sslCertPool or sslInsecure), the driver was creating a bare-bones http.Transport with only TLS settings, overriding databricks-sql-go's well-configured defaults. This could cause timeout issues during large result set downloads because the custom transport lacked:

- Dial timeout and keep-alive settings
- Idle connection timeout
- Connection pool configuration

Now the custom transport includes all the same settings as databricks-sql-go's PooledTransport:

- DialTimeout: 30s
- KeepAlive: 30s
- IdleConnTimeout: 180s
- TLSHandshakeTimeout: 10s
- Connection pooling (MaxIdleConns: 100, MaxConnsPerHost: 100)

These are sensible defaults matching the upstream library. Configuration options for these values can be added in a future PR if needed.

Note: There is a separate upstream issue in databricks-sql-go where CloudFetch uses http.DefaultClient with no timeouts. That should be reported/fixed in the databricks-sql-go repository.

Related to: #192

Co-Authored-By: Claude (databricks-claude-opus-4-5) <noreply@anthropic.com>
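For reference, a transport carrying these settings looks roughly like the sketch below. `newTLSTransport` and its parameters are illustrative names rather than the driver's actual code, but the field values are exactly the defaults listed above, and all types are from the Go standard library.

```go
package example

import (
	"crypto/tls"
	"crypto/x509"
	"net"
	"net/http"
	"time"
)

// newTLSTransport builds an http.Transport that applies custom TLS
// settings (sslCertPool / sslInsecure) while keeping the same timeout
// and pooling defaults as databricks-sql-go's PooledTransport.
func newTLSTransport(certPool *x509.CertPool, insecure bool) *http.Transport {
	dialer := &net.Dialer{
		Timeout:   30 * time.Second, // DialTimeout: 30s
		KeepAlive: 30 * time.Second, // KeepAlive: 30s
	}
	return &http.Transport{
		DialContext:         dialer.DialContext,
		IdleConnTimeout:     180 * time.Second, // IdleConnTimeout: 180s
		TLSHandshakeTimeout: 10 * time.Second,  // TLSHandshakeTimeout: 10s
		MaxIdleConns:        100,               // connection pooling
		MaxConnsPerHost:     100,
		TLSClientConfig: &tls.Config{
			RootCAs:            certPool, // sslCertPool
			InsecureSkipVerify: insecure, // sslInsecure
		},
	}
}
```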
Contributor
I'll update the test.
lidavidm approved these changes on Jan 28, 2026
🥞 Stacked PR
Use this link to review incremental changes.