Releases: databricks/databricks-sql-nodejs
1.7.0
Highlights
- Fixed behavior of `maxRows` option of `IOperation.fetchChunk()`. Now it will return chunks of requested size (#200)
- Improved CloudFetch memory usage and overall performance (#204, #207, #209)
- Remove protocol version check when using query parameters (#213)
- Fixed `IOperation.hasMoreRows()` behavior to avoid fetching data beyond the end of the dataset. Also, it now works properly prior to fetching the first chunk (#205); see the example below
Full diff: 1.6.1...1.7.0
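Together, these fixes mean that reading a result set in fixed-size chunks behaves as expected. A minimal sketch (the chunk size is illustrative):

```javascript
// obtain operation object as usual
do {
  // each chunk contains at most the requested number of rows
  const chunk = await operation.fetchChunk({ maxRows: 10000 });
  // process chunk here
} while (await operation.hasMoreRows()); // no extra fetches past the end of the dataset
```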
Query parameters support
This release also finally enables both named and ordinal query parameters. Usage examples:
```javascript
// obtain session object as usual

// Using named parameters
const operation = await session.executeStatement('SELECT :p1 AS "str_param", :p2 AS "number_param"', {
  namedParameters: {
    p1: 'Hello, World',
    p2: 3.14,
  },
});
```

```javascript
// obtain session object as usual

// Using ordinal parameters
const operation = await session.executeStatement('SELECT ? AS "str_param", ? AS "number_param"', {
  ordinalParameters: ['Hello, World', 3.14],
});
```

Please note that either named or ordinal parameters can be used in a single query, but not both simultaneously.
CloudFetch performance improvements
This release includes various improvements to the CloudFetch feature. It remains disabled by default, but we strongly encourage you to start using it:
```javascript
// obtain session object as usual
const operation = await session.executeStatement('...', {
  useCloudFetch: true,
});
```

1.6.1
- Make default logger a singleton (#199)
- Enable `canUseMultipleCatalogs` option when creating a session (#203)
Full diff: 1.6.0...1.6.1
1.6.0
Highlights
- Added proxy support (#193)
- Added support for inferring NULL values passed as query parameters (#189); see the example below
- Fixed bug with NULL handling for Arrow results (#195)
Full diff: 1.5.0...1.6.0
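For example, a NULL value passed as a named parameter is now inferred correctly. A minimal sketch (the column alias is illustrative):

```javascript
// obtain session object as usual
const operation = await session.executeStatement('SELECT :p1 AS "null_param"', {
  namedParameters: {
    p1: null, // inferred and sent as a NULL parameter
  },
});
```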
Proxy support
This feature allows routing all requests the library makes through a proxy. Proxy support is disabled by default.
To enable it, pass a proxy configuration object to DBSQLClient.connect:
```javascript
client.connect({
  // pass host, path, auth options as usual
  proxy: {
    protocol: 'http', // supported protocols: 'http', 'https', 'socks', 'socks4', 'socks4a', 'socks5', 'socks5h'
    host: 'localhost', // proxy host (string)
    port: 8070, // proxy port (number)
    auth: { // optional proxy basic auth config
      username: ...
      password: ...
    },
  },
});
```

Note: using proxy settings from environment variables is currently not supported.
1.5.0
Highlights
- Added OAuth M2M support (#168, #177)
- Added named query parameters support (#162, #175)
- `runAsync` option is now deprecated (#176)
- Added staging ingestion support (#164)
Full diff: 1.4.0...1.5.0
Databricks OAuth support
Databricks OAuth support added in v1.4.0 is now extended with M2M flow. To use OAuth instead of PAT, pass
a corresponding auth provider type and options to DBSQLClient.connect:
```javascript
// instantiate DBSQLClient as usual
client.connect({
  // provide other mandatory options as usual - e.g. host, path, etc.
  authType: 'databricks-oauth',
  oauthClientId: '...', // optional - overwrite default OAuth client ID
  azureTenantId: '...', // optional - provide custom Azure tenant ID
  persistence: ..., // optional - user-provided storage for OAuth tokens, should implement OAuthPersistence interface
});
```

U2M flow involves user interaction - the library will open a browser tab asking the user to log in. To use this flow,
no other options are required except for authType.
M2M flow does not require any user interaction, which makes it a good option for scripting. To use this
flow, two extra options are required for DBSQLClient.connect: oauthClientId and oauthClientSecret.
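A minimal sketch of an M2M configuration (the client ID and secret values are placeholders):

```javascript
// instantiate DBSQLClient as usual
client.connect({
  // provide other mandatory options as usual - e.g. host, path, etc.
  authType: 'databricks-oauth',
  oauthClientId: 'my-service-principal-client-id', // placeholder - your service principal's OAuth client ID
  oauthClientSecret: 'my-service-principal-secret', // placeholder - the corresponding OAuth client secret
});
```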
Also see Databricks docs
for more details about Databricks OAuth.
Named query parameters
v1.5.0 adds support for query parameters.
Currently only named parameters are supported.
Basic usage example:
```javascript
// obtain session object as usual
const operation = await session.executeStatement('SELECT :p1 AS "str_param", :p2 AS "number_param"', {
  namedParameters: {
    p1: 'Hello, World',
    p2: 3.14,
  },
});
```

The library will infer parameter types from the passed primitive values. Supported data types include booleans, various
numeric types (including native BigInt and Int64 from node-int64), the native Date type, and strings.
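For instance, several primitive types can be mixed in a single call and the library will infer a type for each. A minimal sketch (parameter names and aliases are illustrative):

```javascript
// obtain session object as usual
const operation = await session.executeStatement(
  'SELECT :flag AS "bool_param", :big AS "bigint_param", :when AS "ts_param"',
  {
    namedParameters: {
      flag: true, // inferred as a boolean
      big: 1234567890123456789n, // native BigInt
      when: new Date(), // Date objects are inferred as TIMESTAMP by default
    },
  },
);
```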
It's also possible to explicitly specify the parameter type by passing DBSQLParameter instances instead of primitive
values. It also allows one to use values that don't have a corresponding primitive representation:
```javascript
import { ..., DBSQLParameter, DBSQLParameterType } from '@databricks/sql';

// obtain session object as usual
const operation = await session.executeStatement('SELECT :p1 AS "date_param", :p2 AS "interval_type"', {
  namedParameters: {
    p1: new DBSQLParameter({
      value: new Date('2023-09-06T03:14:27.843Z'),
      type: DBSQLParameterType.DATE, // by default, Date objects are inferred as TIMESTAMP; this allows overriding the type
    }),
    p2: new DBSQLParameter({
      value: 5, // INTERVAL '5' DAY
      type: DBSQLParameterType.INTERVALDAY,
    }),
  },
});
```

Of course, you can mix primitive values and DBSQLParameter instances.
runAsync deprecation
Starting with this release, the library executes all queries asynchronously, so we have deprecated
the runAsync option. It will be completely removed in v2, so you should stop using it and remove all
its usages from your code before version 2 is released. From the user's perspective, the library behaviour won't change.
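In practice this just means dropping the option from your calls. A minimal sketch:

```javascript
// before (deprecated): explicit runAsync flag
// const operation = await session.executeStatement(query, { runAsync: true });

// from v1.5.0 on, runAsync can simply be omitted - queries always run asynchronously
const operation = await session.executeStatement(query);
```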
Data ingestion support
This feature allows you to upload, retrieve, and remove Unity Catalog volume files using the SQL PUT, GET, and REMOVE commands.
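A minimal sketch of an upload, assuming the stagingAllowedLocalPath execute option and illustrative local/volume paths (adjust to your workspace):

```javascript
// obtain session object as usual

// upload a local file into a Unity Catalog volume (paths and volume name are illustrative)
const operation = await session.executeStatement(
  "PUT '/tmp/data.csv' INTO '/Volumes/main/default/my_volume/data.csv' OVERWRITE",
  { stagingAllowedLocalPath: ['/tmp'] }, // assumption: allow-list of local paths permitted for staging commands
);
await operation.close();
```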
1.4.0
- Added Cloud Fetch support (#158)
- Improved handling of closed sessions and operations (#129).
Now, when a session gets closed, all operations associated with it are immediately closed.
Similarly, if the client gets closed, all associated sessions (and their operations) are closed as well (see the sketch below).
Full diff: 1.3.0...1.4.0
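A minimal sketch of the cascading close behavior (the query is illustrative):

```javascript
const session = await client.openSession();
const operation = await session.executeStatement('SELECT 1');

// closing the client also closes the session and, in turn, its operations
await client.close();
```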
Notes:
Cloud Fetch is disabled by default. To use it, pass useCloudFetch: true to IDBSQLSession.executeStatement(). For example:
```javascript
// obtain session object as usual
const operation = await session.executeStatement(query, {
  runAsync: true,
  useCloudFetch: true,
});
```

Note that Cloud Fetch is effectively enabled only for really large datasets, so if the query returns only a few thousand records, Cloud Fetch won't be enabled no matter what the useCloudFetch setting is. Also, a gentle reminder that for large datasets it's better to use fetchChunk instead of fetchAll to avoid OOM errors:
```javascript
do {
  const chunk = await operation.fetchChunk({ maxRows: 100000 });
  // process chunk here
} while (await operation.hasMoreRows());
```

1.3.0
1.2.1
1.2.0
1.1.1
Fix: patch needed for improved error handling wasn't applied when installing v1.1.0
1.1.0
What's Changed
- Fix: the library will no longer attempt to parse column names and will use the ones provided by the server (#84)
- Better error handling: more errors can now be handled in specific `.catch()` handlers instead of being emitted as a generic `error` event (#99)
- Fixed error logging bug (attempt to serialize circular structures) (#89)
- Fixed some minor bugs and regressions
Full Changelog: 1.0.0...1.1.0
Upgrading
No specific actions are required. Revisit your error handling code if you relied on the error event - most errors will now be handled in specific .catch() handlers.